The Free Internet Project

Blog

TikTok is all the rage, so why did India ban it?

TikTok is a social media platform owned by a Chinese firm named ByteDance. The app was first developed in China but is growing more and more popular around the world, especially among teens, for its combination of music, dance, and offbeat humor in short videos that users create and share. Another popular feature is live-streaming, which allows real-time interaction between the host and the audience. Users do not even need to speak English to become an overnight hit with millions of followers on TikTok. 

TikTok has become a phenomenon. The idea of producing short clips is not new – Snapchat and Instagram have offered similar features, and video sharing has been around since YouTube. But, with an enormous user base in China, this new contender surpassed other video-sharing sites and gained incredible popularity. Presently, TikTok has over 500 million active users worldwide. TikTok's worldwide success as an Internet platform is rare for a China-based company. China's strict internet restrictions are well known: behind the country's firewalls, mainstream Western social media sites, such as Facebook and Twitter, are inaccessible. 

India announced the controversial decision to ban TikTok within its borders. Why? As the border clash between China and India escalated, the Indian government recently banned 59 Chinese apps, including TikTok, citing concerns over activities prejudicial to the sovereignty and integrity of India, according to the New York Times. Alternative Indian platforms such as Glance and Roposo are eager to pick up new users after TikTok's departure, but watchdog groups are concerned that Indian local apps may also be censored and controlled by the government or exploited for political propaganda. While banning TikTok could be read as retaliation against China for the border skirmish, the ban could also be viewed as a sign of India's determination to safeguard its citizens' data from foreign manipulation.

Taking the cue from India's decision, the US is considering a ban on TikTok too. US Secretary of State Mike Pompeo warned Americans not to use the app unless "you want your private information in the hands of the Chinese Communist Party," implying the app secretly funnels users' data to the Chinese government.  

Having a reputation for exercising a tight grip over the internet, the Chinese government is frequently accused of privacy breaches. ByteDance, the Chinese firm that owns TikTok, has encountered several challenges as it expanded into markets worldwide. In February 2019, TikTok was fined $5.7 million by the US Federal Trade Commission for illegally collecting personal information from children under 13 without requiring parental consent. On July 3, 2020, the head of the UK's Information Commissioner's Office announced that TikTok was undergoing a similar investigation regarding protection of children's personal data, as its open messaging system permits adults to contact children directly and thus exposes children to risks such as online solicitation and harassment.

Of course, data breaches in social media are not uncommon in the modern digital age. Facebook has been accused multiple times of harvesting users' private information without their consent. Thus, banning TikTok in the name of privacy protection sounds extreme, since other breaches of data by social media platforms have not resulted in banning an entire platform from a country. 

Some users have expressed suspicion that the major impetus for a US ban on TikTok was the significant role TikTok played during the Black Lives Matter protests. In the pandemic era, TikTok fostered new forms of political expression. For example, activists who could not march in the street in person created videos with the hashtag #blacklivesmatter to demonstrate online solidarity against racial injustice. As CNN reported, users on TikTok also live-streamed street protests, documenting police assaulting peaceful demonstrators. TikTok lowered the barriers to communication, allowing users from all over the globe to share content and exchange ideas. Apart from showing cute dogs, teenagers' funny dance steps, and other mundane occurrences, TikTok has also entered the political sphere, even though few politicians are active on the site. Despite the alleged privacy and national security concerns, it is one of the fastest and most unfiltered ways for people to spread messages.

“Any kind of public policy response which is premised on grounds of national security needs to emerge from well-defined criteria, which seems to be absent here,” Mr. Gupta, executive director of the Internet Freedom Foundation, told the New York Times. Banning may be a quick fix, but if authorities can ban an app in the name of protecting citizens’ data without showing clear evidence supporting the alleged claim or legal authority for such an extreme action, it sets a dangerous precedent that would greatly impair internet freedom. Of course, there remains the tension that popular Western-based social media platforms are still banned in China. 

-written by Candice Wang


Meeting Between Facebook, Zuckerberg and Stop Hate for Profit Boycott Group Turns into a Big Fail

Facebook has come under scrutiny for its handling of hate speech and disinformation posted on the platform. With the Stop Hate for Profit movement, corporations have begun to take steps to hold Facebook accountable for the disinformation spread on the platform. So far, more than 400 advertisers, from Coca-Cola to Ford and Lego, have pledged to stop advertising on the social media platform, according to NPR. Facebook has faced intense backlash, particularly since the 2016 election, for allowing disinformation and propaganda to be posted freely. The disinformation and hate, or "fake news" as many call it, is aimed at misinforming voters and spreading hateful propaganda, potentially dampening voter participation.

A broad coalition of groups including Color of Change, the Anti-Defamation League, and the NAACP started the Stop Hate for Profit campaign. (For more on its origin, read Politico.) The goal of the campaign is to push Facebook to make much-needed changes to its policy guidelines as well as changes among the company's executives. The boycott targets the advertising dollars on which the social media juggernaut relies. The campaign has begun to pick up steam, with new companies announcing an end to Facebook ads every day. With this momentum, the group behind the boycott has released a list of 10 first steps Facebook can take.   

Stop Hate for Profit is asking that Facebook take accountability, show decency, and provide support to the groups most affected by the hate spread on the platform. The civil rights leaders behind this movement are focused on making changes at the executive level as well as holding Facebook more accountable for its lackluster terms of service. The top executives currently at Facebook may have a conflict of interest. Critics contend that Facebook has a duty to make sure misinformation and hate are not spread, but that Facebook does not exercise that duty to the fullest because of its relationships with politicians. Rashad Robinson, president of Color of Change, contends that there needs to be a separation between the people in charge of the content allowed on Facebook and those who are aligned with political figures. The group is asking Facebook to hire an executive with a civil rights background who can evaluate discriminatory policies and products. Additionally, the group is asking Facebook to expand what it considers hate speech. Facebook's current terms of service are criticized as ineffective and problematic.   

Facebook's policies and algorithms are among the things the group asks to be changed. Current Facebook policies allow public and private hate groups to exist and even recommend them to many users. The campaign asks that Facebook remove far-right groups that spread conspiracy theories, such as QAnon, from the platform. It also requests the labeling of inauthentic content that spreads hate and disinformation. By contrast, Twitter has taken small steps to label hateful content itself. While many criticize Twitter's actions as not going far enough, it has taken steps Facebook has yet to take. Throughout this process, the campaign asks, Facebook should be transparent with the public about its progress--in the number of ads rejected for hate or disinformation and in a third-party audit of hate spread on the site.  

The group also drew a connection between the hate on the Facebook platform and race issues within the company. Stop Hate for Profit cited a staggering statistic: 42% of Facebook users experience harassment on the platform. This, along with the former Black employee and two job candidates who filed EEOC complaints, points to a culture at Facebook that goes far beyond allowing far-right propaganda and misinformation on the site and highlights a lack of support for users and employees of color. All of this is used to back up why it is essential that Facebook go beyond making simple statements and actually take steps to create change.

Facebook CEO and cofounder Mark Zuckerberg agreed to meet with the civil rights groups behind the boycott amid the growing number of companies getting behind Stop Hate for Profit. Many voiced concerns that Facebook and Zuckerberg are more interested in messaging than in legitimately fixing the underlying problems. After meeting with Zuckerberg on July 7, Stop Hate for Profit released a statement describing what it felt was a disappointing and uneventful meeting. The group asserted that Facebook did what it had previously feared: offered only surface-level rhetoric with no real interest in committing to any real change. Of the ten recommendations, Zuckerberg was open to addressing only the hiring of a person with a civil rights background, and even then he declined to commit to making that position, if created, a C-suite executive role. Rashad Robinson tweeted a direct statement saying that Facebook was not ready to make any changes despite knowing the demands of the group. That view appears consistent with a July 2, 2020 report of a remark Zuckerberg made to employees at a virtual town hall: "We're not gonna change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue."

For now, it remains to be seen if the increased pressure from companies pulling advertisements will eventually cause Facebook and Zuckerberg to institute changes that progressive groups have been pushing for years. So far, it appears not.   

--written by Bisola Oni

Can the 2020 US Presidential Election Be Canceled? COVID-19, Voting-by-Mail, and Other Safeguards During a Pandemic


The United States is experiencing another wave of coronavirus infections, with twenty-one states seeing an increase in their daily infection rates. With alarm bells ringing, many have expressed logistical concerns about the upcoming presidential election. There are two main concerns about Election Day 2020, which is November 3, 2020:

  1. Postponement or cancellation of the election; and
  2. Mitigation of the increasing coronavirus infection rate at polling locations.

Canceling or postponing Election Day 2020 is highly unlikely.

The president does not have the legal authority to cancel the November 3rd election. First, federal statute specifies "the Tuesday next after the first Monday in the month of November" shall be federal Election Day. Moreover, the states are the ones who conduct the operations of elections in each state, and only states have the power to change their election laws, according to Jason Harrow, executive director and chief counsel of Equal Citizens.

Rest assured, overstaying one’s welcome at the White House is not possible. The Constitution prevents presidents from remaining in office past their elected term. Under the 20th Amendment, the president’s term automatically ends at noon on January 20th after a four-year term. If a candidate hasn’t been elected by then, Congress decided long ago that the Speaker of the House will become acting president. 

Congress can’t cancel the election either. But Congress can postpone the election by passing a new federal law. Under Article II, § 1 of the Constitution, Congress has the power to determine the date the election takes place. Thus, Congress could pass a bill before November – but it’s unlikely. As Jerry Goldfeder explains, American presidents and legislators have never canceled or postponed any of the fifty-eight previous presidential elections – not for wars, not for terrorist attacks, and not for the Spanish flu. There is little doubt the presidential election will take place. Nonetheless, the logistics are unclear.

How can Americans stay safe and exercise their constitutional right to vote?

It’s reasonable to say Election Day this year will not be postponed or canceled. So, what other options do Americans have? Currently, four ideas are being pushed forward:

  1. Expanding the number of polling places;
  2. Encouraging early voting;
  3. Developing a time span (e.g., two weeks) within which people can vote; and
  4. Expanding voting-by-mail and absentee voting.

Based on this research, voting-by-mail is the most tenable path forward to mitigate the risks of exposure to the coronavirus.

Reports by the Brennan Center for Justice and the Leadership Conference on Civil Rights show polling place expansion is unlikely. Since 2012, states around the country have closed nearly two thousand polling places, and the pandemic has fanned the flames of this trend. For expansion to work, states would need to open more locations where voters can cast their ballots and train a horde of new polling volunteers before November – both unlikely.

Early voting would help with social distancing measures by reducing crowd sizes. But some states, like Utah, have canceled their early voting options. As more Americans contract the coronavirus, it is unclear whether this will be an option for voters in November. Legislators have not yet jumped on the “time-frame” option for in-person voting. The fourth option, voting-by-mail, is the most controversial.

Fact Check: Cases of Voter Fraud in the Voting-by-Mail Context Are Rare.

Twitter sparked controversy when it added a “fact check” label to President Trump’s tweet about California’s expansion of voting-by-mail procedures for the presidential election. In the tweet, President Trump claimed voting-by-mail expansions would lead to “rigged elections.” But Trump himself voted by mail in 2018, and, in the past, so have Mike Pence, Ivanka Trump, Jared Kushner, Kayleigh McEnany, Bill Barr, Betsy DeVos, Larry Kudlow, Wilbur Ross, Kellyanne Conway, and Alex Azar [source]. 

But should Americans be concerned about voter fraud in the context of voting-by-mail?

According to surveys and polls, 72-78% of American voters want the option to vote by mail in the upcoming presidential election. At this time, 33 states allow voting-by-mail without an excuse, and 5 states conduct all elections by mail. In response to the pandemic, many states have changed their voting procedures – forty-six states, both Democratic- and Republican-controlled, now allow voting-by-mail in some form.

According to MIT’s Election Data and Science Lab, voting-by-mail began during the Civil War, when soldiers on both sides cast their ballots from the battlefield. Since then, instances of voter fraud have been rare. The two most infamous cases occurred in Florida (acts of false witnessing) and Georgia (selling votes). More recently, a Republican campaign representative in Maryland was caught collecting blank ballots and filling them out in favor of a former congressional candidate, Mike Harris.

Experts reject the recent hype about voter fraud in the voting-by-mail context. Over the past two decades, 250,000 votes were submitted via mail-in ballots. Based on the Heritage Foundation’s database on voter fraud, only 1,285 instances of voter fraud were found, yielding 1,110 criminal convictions. Of those 1,110 convictions, only 204 cases concerned alleged fraudulent use of absentee ballots. You can find a detailed record of every voter fraud case here. This means that over twenty years, there have been about ten cases of voter fraud per year.

The risk of voter fraud in this context is only a fraction of a percent: 0.0816%. For context, you are more likely to be struck by lightning in your lifetime, with a 0.033% chance, than to witness “widespread” voter fraud taking place during the 2020 presidential election. For perspective, the chance of dying from coronavirus in the United States is 4.795%. Of course, as with any voting procedure, safeguards need to be implemented. Professor Ned Foley, an election law expert, has identified a need for states to clarify their procedures should a candidate contest the results of mail-in ballots, in a much-discussed article, "Why Vote-by-Mail Could be a Legal Nightmare in November." Foley recommends: "But states — especially battlegrounds in the presidential election — should clarify as soon as possible the rules that their own courts are supposed to use in litigation that might arise over counting absentee ballots. It is not enough that state law has rules for casting ballots. There needs to be clarity on whether ballots can still count if something has gone wrong in the process of casting of them, especially if the problem is not the voter’s fault."
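The percentages above follow directly from the figures quoted in this article. A quick sanity check of that arithmetic, using the article's numbers exactly as given, might look like:

```python
# Sanity-check the voter-fraud figures quoted above (numbers as given in the article).
mail_votes = 250_000          # mail-in ballots over the past two decades (as quoted)
absentee_fraud_cases = 204    # convictions involving absentee ballots
years = 20

fraud_rate_pct = absentee_fraud_cases / mail_votes * 100
cases_per_year = absentee_fraud_cases / years

print(f"fraud rate: {fraud_rate_pct:.4f}%")       # 0.0816%, as stated above
print(f"about {round(cases_per_year)} cases/yr")  # about 10, as stated above
```

This only verifies that the article's percentage and per-year figures are consistent with its own counts; it does not validate the underlying data.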

Standing in long lines for countless hours is not a reasonable option this election season because it would certainly increase the spread of the deadly virus. As Foley puts it: "There’s no question that, for public health reasons, expanding vote-by-mail is a wise decision for states to be making right now."

It’s clear: voting-by-mail is not perfect or foolproof. There are rare instances that justify some degree of concern. However, states should allow voting-by-mail due to the higher risk of contracting the coronavirus.

-written by Allison Hedrick


What is Parler? An "unbiased social media"? Or a platform for conservative Republicans?

Parler (French for "to talk") is a social media platform started in 2018. Its mission is to be “an unbiased social media focused on real user experiences and engagement." It is touted as an alternative to Twitter that allows users to post content and comment like Twitter--without political bias. Many Republican politicians who believe Twitter is biased against conservatives have migrated to Parler and are promoting it as a platform. Since Twitter and Snapchat recently moderated some of Donald Trump’s posts that violated their community standards, conservative Republicans have switched to Parler. Ted Cruz joined Parler, as did Republican politicians Jim Jordan, Elise Stefanik, and Nikki Haley, as CNBC reported. Parler may become Republican lawmakers’ and Trump's favorite social media site. Trump’s campaign manager Brad Parscale accused Twitter and Facebook of biased censorship and stated that the campaign team may select an alternative platform, such as Parler, as reported by the Wall Street Journal. Parler ranked as the top news app in Apple’s app store and has 1.5 million users in 2020. By comparison, Twitter has over 145 million active users.

Content moderation by Internet platforms has become a hot-button issue. In the past, platforms took permissive approaches in the name of free speech, but they soon realized the need to moderate some objectionable content posted by their users. Most people would agree that, despite the importance of free expression and the free flow of information, allowing everyone to post anything online may lead to false, illegal, and harmful content being shared. So Internet companies must exercise some moderation of user content, but the unsolved puzzle is: what should the standards be, and who should decide them?

Touted by Republicans, Parler attracted many new users in the past few days. However, some users realized that the newly hyped platform was not free of content moderation. Besides restricting the commonly prohibited content outlined in Parler’s Community Guidelines, such as spam, fighting words, pornography, and criminal solicitation, Parler also makes clear in its User Agreement: “Parler may remove any content and stop your access to the Services at any time and for any reason or no reason, although Parler endeavors to allow all free speech that is lawful and does not infringe the legal rights of others … Although the Parler Guidelines provide guidance to you regarding content that is not proper, Parler is free to remove content and terminate your access to the Services even where the Guidelines have been followed.”

Some liberal users were reportedly banned from Parler. Techdirt compiled a list of users who were banned. Parler's banning of liberal users does not appear consistent with its motto as an "unbiased social media." Even some conservative commentators criticized Parler for not abiding by its privacy policy when it asked users for a driver's license. The goal of a politically unbiased Internet platform may be a worthy one, but it remains to be seen whether Parler provides such a space. 

--written by Candice Wang


US Senate Intelligence Committee Issues 4th Report on Russian Interference Based on 2017 Intelligence Community Assessment


On April 21, 2020, the U.S. Senate Intelligence Committee published the fourth and penultimate volume of an extensive report on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election. Volume 4 is titled “Review of the Intelligence Community Assessment.” It examines the 2017 Intelligence Community Assessment (ICA) that determined that Russia interfered with the 2016 U.S. election. The Committee examined the sources and work behind the ICA amid the controversy over the U.S. intelligence investigation raised by the Trump Administration, which claimed the investigation was politically motivated against the president. The Senate Intelligence Committee report repudiates that notion. To protect the identities of the sources behind the ICA, much of the 157-page report has been redacted. The report unanimously concludes that the findings in the 2017 ICA support the conclusion that Russia conducted an aggressive propaganda campaign and interfered in the 2016 U.S. election.

Volume 4's unredacted factual background and the key findings of the examination of the ICA provide a clear view of the Committee's conclusions. The Committee examined the ICA by reviewing source documents and interviewing officials involved in preparing the assessment. In reviewing the ICA, Senate Intelligence Committee Chairman Richard Burr (R-NC) stated the committee looked at two key issues: (1) whether the final product of the ICA met the task assigned by the President; and (2) whether the analysis was backed by the intelligence provided in the ICA. Burr concluded that the analysis and conclusions in the ICA were strong, and no reason could be found to dispute the findings of the Intelligence Community. In a statement, Chairman Burr concluded: "The I.C.A. reflects strong tradecraft, sound analytical reasoning and proper justification of disagreement in the one analytical line where it occurred. The committee found no reason to dispute the intelligence community’s conclusions.”

The ICA's warnings of continued Russian interference, and of the vigilance needed ahead of the 2020 election, were reaffirmed. Russia sought to interfere with the 2016 election to harm the candidacy of Secretary Hillary Clinton and to help elect then-candidate Donald Trump. The Russian interference did not stop after the 2016 election: Russia has continued to interfere with American democracy, and the extent of the aggressive attacks as noted in the ICA is not exaggerated, the Senate report determined. Both Chairman Burr and Vice Chairman Mark Warner (D-VA) concluded there is no doubt that Russia will once again interfere in the 2020 election as it did in 2016.

While much of the report is redacted, the key findings validate the 2017 ICA:

  1. The ICA met the task assigned by the President, providing a bipartisan analysis of the Russian interference. None of the individuals involved in drafting the ICA were biased toward conclusions favored by a particular political party; because of this, the ICA provided an accurate account of the Russian interference during the 2016 election.
  2. The aggressive and unprecedented nature of Russia’s interference in the 2016 election is well documented in the ICA. The reporting does not overreach in warning policymakers of Russia’s role.
  3. It is important to note that the ICA makes no policy recommendation for combatting future Russian interference. While it is all but assured that Russia will again interfere in 2020, no recommendations are given. The lack of policy suggestions is intentional: the intelligence agencies themselves do not make policy; they provide detailed analysis and warnings to those empowered to lawfully create policy.
  4. It is noteworthy that the ICA does not include much information about any attempts Russia made to interfere with the elections in 2008 and 2012, instead making it clear that the actions in 2016 were unprecedented.

The Committee did find that the ICA fell short in certain areas. While the ICA provided comprehensive insight into 2016 and warnings about Russia’s continued interference, it did not delve into Russian propaganda through state-owned media platforms. Insight into media networks such as RT, whose coverage amplified the Wikileaks releases of Democratic National Committee material, would have helped substantiate the full breadth of the Russian propaganda effort.  

Volume 4 further substantiates the need for greater steps to stop Russian interference in U.S. elections. A fifth and final volume is expected from the Committee as part of its report on the Russian election interference.

-written by Bisola Oni

Why Voters Should Beware: Lessons from Russian Interference in 2016 Election, Political and Racial Polarization on Social Media

Overview of the Russian Interference Issue

The United States prides itself on having an open democracy, with free and fair elections decided by American voters. If Americans want a policy change, then the remedy most commonly called upon is political participation--and the vote. If Americans want change, then they should vote out the problematic politicians and choose public officials to carry out the right policies. However, what if the U.S. voting system is skewed by foreign interference? 

American officials are nearly unanimous in concluding, based on U.S. intelligence, that Russia interfered with the 2016 presidential election [see, e.g., here; here; and Senate Intelligence Report]. “[U]ndermining confidence in America’s democratic institutions” is what Russia seeks. In 2016, few in the U.S. were even thinking about this type of interference; the US’s guard was down. Russia interfered with the election in various ways, including fake campaign advertisements, bots on Twitter and Facebook that pumped out emotionally and politically charged content, and the spread of disinformation or “fake news.” Social media hacking, as opposed to physical-polling-center hacking, is at the forefront of discussion because it can not only change who is in office but also shift American voters’ political beliefs and understanding of political topics, or depress turnout. 

And, if you think Russia is taking a break this election cycle, you'd be wrong. According to a March 10, 2020 New York Times article, David Porter of the FBI Foreign Influence Task Force says: "We see Russia is willing to conduct more brazen and disruptive influence operations because of how it perceives its conflict with the West."

What Interference Has to Do with Political Polarization

Facebook and Twitter have been criticized countless times by various organizations, politicians, and the media for facilitating political polarization. The U.S. political system of two dominant parties is especially susceptible to political polarization. Individuals belonging to either party become so invested in that party’s beliefs that they see the other party’s members not just as different but as wrong and detrimental to the future of the country. In the past twenty years, the share of people who consistently hold conservative or liberal views went from 10% to 20%, showing the increasing division, according to an article in Greater Good Magazine.

Political polarization is facilitated by platforms like Facebook and Twitter because of their content algorithms, which are designed to make the website experience more enjoyable. The Facebook News Feed “ranks stories based on a variety of factors including their history of clicking on links for particular websites,” as described by a Brookings article. Under the algorithm, if a liberal user frequently clicks on liberally skewed content, that is what they are going to see the most. Research shows this algorithm reduced cross-cutting political content “by 5 percent for conservatives and 8 percent for liberals.” Thus, the algorithm limits your view of other opinions.
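Facebook's actual News Feed ranking is proprietary, but the feedback loop described above can be illustrated with a deliberately simplified, hypothetical sketch. Everything here (the `rank_feed` function, the field names, the sample sources) is invented for illustration and is not Facebook's real algorithm:

```python
# Toy illustration of engagement-based feed ranking (NOT Facebook's real algorithm).
# The idea: past clicks on a source boost that source's posts, so the feed
# gradually narrows toward content the user already agrees with.
from collections import Counter

def rank_feed(posts, click_history):
    """Order posts by how often the user previously clicked each post's source."""
    clicks = Counter(click_history)
    # More past clicks on a source => higher score => shown earlier in the feed.
    return sorted(posts, key=lambda post: clicks[post["source"]], reverse=True)

posts = [
    {"title": "Policy analysis",  "source": "liberal_site"},
    {"title": "Opinion column",   "source": "conservative_site"},
    {"title": "Local news",       "source": "neutral_site"},
]

# A user who has mostly clicked conservative links in the past...
history = ["conservative_site", "conservative_site", "neutral_site"]

# ...sees conservative content ranked first: the start of a filter bubble.
for post in rank_feed(posts, history):
    print(post["source"], "-", post["title"])
```

Run repeatedly, with each click feeding back into `history`, this kind of loop pushes the least-clicked viewpoints further down the feed, which is the "reduced cross-cutting content" effect the research above describes.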

So, you might ask, “Why is that bad? I want to see content more aligned with my beliefs.” Democracy is built on the exchange of varying political views and dissenting opinions, and the US has long prided itself on freedom of speech and a free flow of ideas. Algorithmic grouping of like-minded people can be useful when it comes to hobbies and interests; however, when it comes to consistently grouping individuals based on political beliefs, it can have a negative impact on democracy. This grouping causes American users to live in “filter bubbles” that only expose them to content aligning with their viewpoints. Users tend to find this grouping enjoyable because of confirmation bias: individuals are more likely to consume content that aligns with their pre-existing beliefs. So, all the articles about Trump successfully leading the country will be ranked first on a conservative user’s Facebook News Feed and will also be the most enjoyable for that user. This filter bubble is dangerous to a democratic system because the lack of diverse perspectives in news consumption encourages close-mindedness and increases distrust of anyone who disagrees.

During the 2016 presidential election, Russian hackers put out various types of fake articles, campaign advertisements, and social media posts that were politically charged on either the liberal or conservative side. Because the Facebook algorithm shows more conservative content to conservatives, and likewise for liberals, the hackers had no problem reaching their desired audiences quickly and effectively. On Facebook they created thousands of automated accounts that would enter various interest groups and engage with their target audience. For example, in 2016, a Russian soldier successfully entered a U.S. Facebook group pretending to be a 42-year-old housewife, as reported by Time; he responded to political issues discussed in the group, using emotional and political buzzwords. On Twitter, thousands of fake accounts run by Russians and bots were used to spread disinformation about Hillary Clinton, continuously mentioning her email scandal from her time as Secretary of State and a fake Democratic pedophile ring dubbed “Pizzagate.” These bots spewed hashtags like “#MAGA” and “#CrookedHillary,” accounting for more than a quarter of the content under those hashtags.

Facebook and Twitter’s Response to the 2016 Russian Interference

According to a Wall Street Journal article on May 26, 2020 and a Washington Post article on June 28, 2020, Facebook conducted an internal review after the 2016 election of how it could reduce polarization on its platform, but CEO Mark Zuckerberg and other executives decided against the recommended changes because they were seen as "paternalistic" and would potentially affect conservatives on Facebook more.

After coming under increasing fire from critics for allowing misinformation and hate speech to go unchecked, Facebook announced some changes to "fight polarization" on May 27, 2020. The initiative included a recalibration of each user’s Facebook News Feed to prioritize content from family and friends over divisive news content. Facebook's reasoning was that its data show people are more likely to have meaningful discourse with people they know, fostering healthy debate rather than ineffective, one-off exchanges. Facebook also announced a policy directly targeting the spread of disinformation on the platform: an independent fact-checking program that will automatically check content in over 50 languages around the world for false information. Disinformation that could contribute to “imminent violence, physical harm, and voter suppression” will be removed.

But those modest changes weren't enough to mollify Facebook's critics. Amid the mass nationwide protests of Minneapolis police officer Derek Chauvin's brutal killing of George Floyd, nonprofit organizations including Color of Change organized an ad boycott against Facebook. Over 130 companies agreed to pull their ads from Facebook for July or longer. That led Zuckerberg to reverse his position exempting politicians from fact-checking and from the company's general policy on misinformation. Zuckerberg said that politicians would now be subject to the same policy as every other Facebook user and would be flagged if they disseminated misinformation (or hate speech) that violates Facebook's general policy.

Twitter’s CEO Jack Dorsey not only implemented a fact-checking policy similar to Facebook's, but also admitted that the company needed to be more transparent in its policy making. The fact-checking policy “attached fact-checking notices” at the bottom of various tweets, alerting users that the tweets could contain false claims. Twitter also decided to forbid all political advertising on its platform. In response to Twitter's flagging of his content, President Trump issued an executive order seeking to increase regulation of social media platforms and to stop them from deleting users’ content and censoring their speech.

With the 2020 U.S. election only four months away, Internet companies are still figuring out how to stop Russian interference and the spread of misinformation, hate speech, and political polarization intended to interfere with the election. Whether they succeed remains to be seen. But there have been more policy changes and decisions by Facebook, Twitter, Reddit, Snapchat, Twitch, and other platforms in the last month than in all of last year.

-by Mariam Tabrez

Summary of EARN IT and proposed bills to amend Section 230 of CDA regarding ISP safe harbor

 

Section 230 of the Communications Decency Act of 1996 has come under fire in the U.S. Congress. Republican lawmakers contend that Section 230 is being invoked by Internet platforms, such as Facebook, Google, and Twitter, as an improper shield to censor content with a bias against conservative lawmakers and viewpoints. These lawmakers contend that Section 230 requires Internet sites to maintain "neutrality" or be a "neutral public forum." However, some legal experts, including Jeff Kosseff, who wrote a book on the legislative history and subsequent interpretation of Section 230, contend this interpretation is a blatant misreading of Section 230, which specifically creates immunity from civil liability for ISPs for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." Donald Trump issued an Executive Order that attempts to (re)interpret "good faith" to require political neutrality. The Department of Justice appeared to concede, however, that "good faith" is unclear and recommended that Congress provide a statutory definition of the term. Several Republican lawmakers in the House and the Senate have proposed new legislation that would reform or eliminate Section 230 and limit Internet platforms’ ability to censor content that the platforms feel is harmful, obscene, or misleading. This article summarizes the proposed bills to amend Section 230.

1. Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020 (EARN IT Act, S.3398): no immunity for violation of law on child sexual abuse material unless ISP earns back via best practices

The EARN IT Act was introduced by Senator Lindsey Graham (R-SC) and co-sponsored by Senator Richard Blumenthal (D-CT). The EARN IT Act’s main purpose is to carve out from the ISP immunity under Section 230(c)(2)(A) and thus to expose ISPs to potential civil liability pursuant to 18 U.S.C. section 2255 or state law based on activity that violates 18 U.S.C section 2252 or 2252A (which cover child sexual abuse material (CSAM) distribution or receipt). However, an ISP can "EARN" back its immunity if it follows the requirements of the Act's newly created safe harbor:

  • "(i) an officer of the provider has elected to certify to the Attorney General under section 4(d) of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020 that the provider has implemented, and is in compliance with, the child sexual exploitation prevention best practices contained in a law enacted under the expedited procedures under section 4(c) of such Act and such certification was in force at the time of any alleged acts or omissions that are the subject of a claim in a civil action or charge in a State criminal prosecution brought against such provider; or
  • “(ii) the provider has implemented reasonable measures relating to the matters described in section 4(a)(3) of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020, subject to the exceptions authorized under section 4(a)(1)(B)(ii) of that Act, to prevent the use of the interactive computer service for the exploitation of minors.”

To develop the "child sexual exploitation prevention best practices" required for the new safe harbor, the EARN IT Act would create a commission called the “National Commission on Online Child Sexual Exploitation Prevention,” consisting of sixteen members. The Commission’s duty would be to devise a list of “best practices” for combatting child sexual abuse material (CSAM) and send the list to Attorney General William Barr, the Secretary of Homeland Security, and the Chairman of the Federal Trade Commission—all of whom would be appointed as members of the Commission—for review. These three members, dubbed the “Committee,” would have the power to amend, deny, or approve the list of “best practices” created by the Commission. After the Committee approves a list of “best practices,” the list is sent to Congress, which has ninety days to file a “disapproval motion” to veto the list from going into effect. 

Text of EARN IT Act (S. 3398)

Sponsors Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT)

UPDATED July 4, 2020: The Senate Judiciary Committee unanimously approved the bill (22-0).  It now will be considered by the Senate. 

2. Limiting Section 230 Immunity to Good Samaritans Act: creates civil action against edge providers for "intentionally selective enforcement" of content moderation

In June 2020, Sen. Josh Hawley (R-MO) introduced a bill titled Limiting Section 230 Immunity to Good Samaritans Act. The bill defines a "good faith" requirement in Section 230 for content moderation by a newly defined category of "edge providers": Internet platforms with more than 30 million users in the U.S. or more than 300 million users worldwide, plus more than $1.5 billion in annual global revenue. The category does not include 501(c)(3) nonprofits. The bill defines good faith to exclude "intentionally selective enforcement of the terms of service," including by an algorithm that moderates content. The term is vague. Presumably, it is meant to cover politically biased moderation (see Ending Support for Internet Censorship Act below), but it might also apply to situations in which ISPs selectively enforce their policies simply because of the enormous amount of content (billions of posts) on their platforms, in a kind of triage. The bill also creates a cause of action for users to sue Internet platforms that intentionally selectively enforce their terms and to recover $5,000 in statutory damages or actual damages.

Text of Limiting Section 230 Immunity to Good Samaritans Act

Sponsors: Sen. Josh Hawley (R-MO); Sens. Marco Rubio (R-FL), Mike Braun (R-IN), Tom Cotton (R-AR); Sen. Kelly Loeffler (R-GA)

3.  Ending Support for Internet Censorship Act (“Hawley Bill," S.1914): ISPs must get "immunity certification" from FTC that ISP doesn't moderate content in "politically biased manner"

The Hawley bill, Ending Support for Internet Censorship Act, introduced by Senator Josh Hawley (R-MO) and co-sponsored by Senators Marco Rubio (R-FL), Mike Braun (R-IN), and Tom Cotton (R-AR), seeks to require ISPs to obtain an "immunity certification from the Federal Trade Commission"; the certification requires the ISP "not [to] moderate information provided by other information content providers in a manner that is biased against a political party, political candidate, or political viewpoint."  The ISP must "prove[] to the Commission by clear and convincing evidence that the provider does not (and, during the 2-year period preceding the date on which the provider submits the application for certification, did not) moderate information provided by other information content providers in a politically biased manner."

The bill defines "politically biased moderation" as:

POLITICALLY BIASED MODERATION.—The moderation practices of a provider of an interactive computer service are politically biased if—

  • “(I) the provider moderates information provided by other information content providers in a manner that—
  • “(aa) is designed to negatively affect a political party, political candidate, or political viewpoint; or
  • “(bb) disproportionately restricts or promotes access to, or the availability of, information from a political party, political candidate, or political viewpoint; or
  • “(II) an officer or employee of the provider makes a decision about moderating information provided by other information content providers that is motivated by an intent to negatively affect a political party, political candidate, or political viewpoint."

Text of ESICA (S. 1914)

Sponsor: Senator Josh Hawley (R-MO)

4. Stop the Censorship Act (“Gosar Bill,” H.R.4027): removes "objectionable" from Good Samaritan provision for content moderation, limiting it to "unlawful material"

The Gosar bill, Stop the Censorship Act, seeks to eliminate Section 230 immunity for Internet platforms like Facebook, Google, and Twitter for censoring content that the platforms deem “objectionable.” US Representative Paul Gosar (R-AZ), joined by fellow conservative Congressmen Mark Meadows (R-NC), Ralph Norman (R-SC), and Steve King (R-IA), believes the language of Section 230's Good Samaritan provision is too broad. The Gosar bill would strike the language in Section 230 that allows Internet platforms to censor content deemed “objectionable”; the only content that should be censored, the sponsors argue, is “unlawful” content (e.g., CSAM). Further, the bill would give platform users the option to choose between a safe space on the platform, featuring content-moderated feeds controlled by the platform, and an unfettered platform that includes all objectionable content. The bill would change Section 230(c)(2) as follows:

Current: 

(2) Civil liability. No provider or user of an interactive computer service shall be held liable on account of— (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or..."

Proposed change:

(2) Civil liability. No provider or user of an interactive computer service shall be held liable on account of—(A) any action voluntarily taken in good faith to restrict access to or availability of unlawful material;

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1); and

(C) any action taken to provide users with the option to restrict access to any other material, whether or not such material is constitutionally protected.”

Text of SCA (H.R.4027)

Sponsors: Rep. Gosar. Cosponsors: Rep. Mark Meadows (R-NC), Rep. Steve King (R-IA), Rep. Ralph Norman (R-SC), Rep. Ted Yoho (R-FL), Rep. Ron Wright (R-TX), Rep. Glenn Grothman (R-WI)

5. Stopping Big Tech’s Censorship Act (Sen. Kelly Loeffler (R-GA)): adds conditions to both Section 230(c)(1) and (c)(2) immunities, including subjecting content moderation of Internet platforms to First Amendment-style limitations on government restrictions of speech

US Senator Kelly Loeffler (R-GA) recently introduced the “Stopping Big Tech’s Censorship Act,” which would amend language in Section 230 of the Communications Decency Act to “protect First Amendment Rights” of users on social media platforms.

The first change is to the immunity in Section 230(c)(1). The bill would require Internet platforms to "take[] reasonable steps to prevent or address the unlawful use of the interactive computer service or unlawful publication of information on the interactive computer service,’’ in order to qualify for the immunity from defamation and other claims based on the content of their users.

The second change is to the immunity in Section 230(c)(2). Internet platforms will only enjoy Section 230(c) immunity for their content moderation if: “(I) the action is taken in a viewpoint-neutral manner; (II) the restriction limits only the time, place, or manner in which the material is available; and (III) there is a compelling reason for restricting that access or availability.” This set of requirements is substantial and might be hard to put into place with current community standards.  For example, removing hate speech, white supremacist propaganda, neo-Nazi content, racist speech, and other offensive content might be viewed as viewpoint discrimination under this approach.

Duty to take reasonable steps to moderate unlawful content. Loeffler's bill also adds a requirement that the Internet platforms "take reasonable steps to prevent or address the unlawful use of the interactive computer service or unlawful publication of information on the interactive computer service."

Disclosure of policies. Further, the bill requires Internet platforms to disclose their content moderation policy: “(A) a provider of an interactive computer service shall, in any terms of service or user agreement produced by the provider, clearly explain the practices and procedures used by the provider in restricting access to or availability of any material; and (B) a provider or user of an interactive computer services that decides to restrict access to or availability of any material shall provide a clear explanation of that decision to the information content provider that created or developed the material.”

Text of SBTCA

-written by Adam Wolfe

Over 130 Companies Remove Ads from Facebook in #StopHateforProfit Boycott, forcing Mark Zuckerberg to change lax Facebook policy on misinformation and hate content

In the aftermath of the Cambridge Analytica scandal, in which the company exploited Facebook to target and manipulate swing voters in the 2016 U.S. election, Facebook did an internal review to examine the company's role in spreading misinformation and fake news that may have affected the election, as CEO Mark Zuckerberg announced. In 2018, Zuckerberg announced that Facebook was making changes to be better prepared to stop misinformation in the 2020 election. Critics dismissed the changes as modest, however. As WSJ reporters Jeff Horwitz and Deepa Seetharaman detailed, Facebook executives largely rejected the internal study's recommendations to reduce polarization on Facebook: doing so might be "paternalistic" and might open Facebook up to criticisms of being biased against conservatives.

Despite the concerns about fake news and misinformation affecting the 2020 election, Facebook took the position that fact checking for misinformation did not apply to the posts and ads by politicians in the same way as they applied to everyone else. Facebook's policy was even more permissive to political ads and politicians. As shown below, Facebook justified this hands-off position as advancing political speech: "Our approach is grounded in Facebook's fundamental belief in free expression, respect for the democratic process, and the belief that, especially in mature democracies with a free press, political speech is the most scrutinized speech there is. Just as critically, by limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words."

Facebook's Fact-Checking Exception for Politicians and Political Ads

By contrast, Twitter CEO Jack Dorsey decided to ban political ads in 2019 and to monitor the content of politicians just as Twitter does all other users' content for misinformation and other violations of Twitter's policy. Yet Zuckerberg persisted in his "hands off" approach: "I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online." Zuckerberg even said Twitter was wrong to add warnings to two of President Trump's tweets, one as misleading (regarding mail-in ballots) and one as glorifying violence (Trump said, "When the looting starts, the shooting starts" regarding the protests of Minneapolis police officer Derek Chauvin's killing of George Floyd). Back in October 2019, Zuckerberg defended his approach in the face of withering questioning by Rep. Alexandria Ocasio-Cortez.

 

In May and June 2020, Zuckerberg persisted in his "hands off" approach. Some Facebook employees quit in protest, while others staged a walkout.  Yet Zuckerberg still persisted. 

On June 17, 2020, Color of Change, which is "the nation’s largest online racial justice organization," organized with NAACP, Anti-Defamation League, Sleeping Giants, Free Press, and Common Sense Media a boycott of advertising on Facebook for the month of July. The boycott was labeled #StopHateforProfit. Within just 10 days, over 130 companies joined the ad boycott of Facebook, including many large companies such as Ben and Jerry's, Coca-Cola, Dockers, Eddie Bauer, Levi's, The North Face, REI, Unilever, and Verizon.

On June 26, 2020, Zuckerberg finally announced some changes to Facebook's policy.  The biggest changes:

(1) Moderating hateful content in ads. As Zuckerberg explained on his Facebook page, "We already restrict certain types of content in ads that we allow in regular posts, but we want to do more to prohibit the kind of divisive and inflammatory language that has been used to sow discord. So today we're prohibiting a wider category of hateful content in ads. Specifically, we're expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others. We're also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them."

(2) Adding labels to posts, including from candidates, that may violate Facebook's policy. As Zuckerberg explained, "Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

"We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society -- but we'll add a prompt to tell people that the content they're sharing may violate our policies.

"To clarify one point: there is no newsworthiness exemption to content that incites violence or suppresses voting. Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down. Similarly, there are no exceptions for politicians in any of the policies I'm announcing here today." 

Facebook's new labeling of candidates' content sounds very similar to the Twitter labels that Zuckerberg had criticized as wrong. And Facebook's new policy of moderating hateful content in ads claiming that "people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status" are "a threat to the physical safety, health or survival of others" seems a positive step toward preventing Facebook from being a platform to sow racial discord, which is a goal of Russian operatives according to U.S. intelligence.

Facebook new policy on moderation of political ads and posts by politicians and others

The organizers of the boycott, however, were not impressed with Facebook's changes. They issued a statement quoted by NPR: "None of this will be vetted or verified — or make a dent in the problem on the largest social media platform on the planet. We have been down this road before with Facebook. They have made apologies in the past. They have taken meager steps after each catastrophe where their platform played a part. But this has to end now."

 

Trump Campaign Snaps at Being Removed from Snapchat's Discover Page

On June 3, 2020, Snapchat decided to stop promoting the Snapchat account of Donald Trump on its Discover page, which provides a feed of stories from celebrities and other popular profiles that are curated by Snapchat for its users.

Apple Pay under Scrutiny: Calls for Contactless Payments Give Rise to Competition Concerns


Calls for safer transactions during the novel coronavirus pandemic have contributed to a 150% increase in the use of contactless payments since 2019. Experts estimate that mobile payment transactions will total $161.41 billion by 2021. Given the increased need for contactless payments, the U.S. House of Representatives and Department of Justice, as well as the European Union’s European Commission, have launched investigations into Apple’s Apple Pay service. These investigations seek to evaluate Apple’s “power and potential anticompetitive behavior.”

According to a Washington Post article, two competition concerns drive the investigations into Apple Pay:

  1. Apple’s exclusive control over iPhones’ “near-field communication” technology, essentially a chip in the iPhone that allows consumers to use their phones to pay at store checkout counters; and
  2. The terms of agreement Apple forces on merchants who accept Apple Pay. 

Apple in the Antitrust Spotlight

Antitrust laws are “rules of the competitive marketplace.” These laws protect consumers from destructive business practices, according to the Federal Trade Commission. The chief goal of antitrust laws is to encourage aggressive competition among sellers to give consumers “the benefits of lower prices, higher quality products and services, more choices, and greater innovation.”

Apple Pay – A Monopoly on Mobile Contactless Payments?

 

Apple Pay debuted in 2014, launching the mobile contactless payment market. Currently, Apple Pay controls the largest share of that market in the United States. It’s no surprise, then, that Apple has attracted a host of rivals, whose antitrust complaints target both Apple Pay and the App Store. 

Apple Pay allows consumers to use their iPhones to pay for goods at participating stores. To make this work, Apple Wallet, a digital wallet, stores a digitized version of a consumer’s credit or debit card. Then, when at the checkout counter, a “near-field communication” (NFC) chip, located within the iPhone, communicates with the store’s contactless terminal.

“Voilà! Look ma, no wallet!” The days of searching pockets and purses are over. Convenient, right?

Well, maybe not. The problem: Apple’s NFC chip is “closed.” Only cards stored in a consumer’s Apple Wallet can access the chip and use the contactless payment feature. As a result, card issuers must enter into an Apple Pay agreement to permit their customers to make mobile contactless payments with their credit and/or debit cards. As reported in the Guardian, according to American and European government officials, this denies consumers access to better quality, innovation, and competitive prices. Rivals allege the “closed” chip stifles iPhone users’ ability to pick from other mobile contactless payment services, such as Google Pay and Microsoft Wallet, and thereby significantly interferes with rivals' ability to compete in the mobile contactless payment market.

In 2019, the Department of Justice began an unofficial antitrust investigation into Apple to determine if it has engaged in anti-competitive business practices. In a public statement, the DOJ announced it would review “whether and how market-leading online platforms have achieved market power and are engaging in practices that have reduced competition, stifled innovation, or otherwise harmed consumers.”

Last year, the House of Representatives launched a broad investigation into four major tech companies, including Apple. The House has requested extensive documentation and Tim Cook’s presence at an antitrust committee hearing; it remains uncertain whether Cook will attend. Apple is also in the EU’s competition law crosshairs. This past March, France fined Apple nearly $1.2 billion for antitrust violations. While the European Commission’s investigation largely targets the App Store, it is also investigating Apple Pay over the restriction placed on the iPhone’s NFC chip. Companies like PayPal allege this restriction stops iPhone users from using rival payment options.

Although American and European officials have not completed their investigations into Apple Pay, it is likely more lawsuits against Apple are on the horizon. Until then, experts like Tim Derdenger, a professor at Carnegie Mellon’s Tepper School of Business, urge legislators to act sooner rather than later.

-by Allison Hedrick
