The Free Internet Project

Blog

The Twitter Hack: What Preliminary Investigations Have Revealed

What happened: Hackers accessed a slew of Twitter accounts in order to sell coveted user names, and took control of high-profile accounts to tweet links to a Bitcoin scam.

In a recent blog post, Twitter admitted that its platform was hacked on Wednesday, July 15, 2020. Twitter said the hackers engaged in a "social engineering scheme" to access its internal tools. Twitter defined "social engineering" as "the intentional manipulation of people into performing certain actions and giving out their personal information."

Ultimately, the hackers accessed 130 Twitter accounts. They were able to reset the passwords for 45 of those accounts, log in, and tweet out cryptocurrency (Bitcoin) scams. The hacking scheme escalated just before 3:30 p.m. on July 15, 2020. According to a New York Times investigation, the accounts of certain cryptocurrency company elites began asking for Bitcoin donations to a website called "cryptoforhealth." The Bitcoin wallet set up to receive the donations was none other than the wallet that a hacker going by the alias "Kirk" (discussed below) had been using all day. "Kirk" then started tweeting out links from celebrities' and tech giants' accounts telling users that if they sent money to a Bitcoin address, the amount would be doubled in return.

According to an investigation by security journalist Brian Krebs, the Bitcoin account processed 383 transactions; according to the New York Times, 518 transactions were processed worldwide. It wasn't until around 6 p.m. that Twitter put a stop to the scam messages. Twitter's blog post stated: "We're embarrassed, we're disappointed, and more than anything, we're sorry." Once the hacks were detected, Twitter "secured and revoked access to internal systems," restricted the functionality of many Twitter accounts by preventing tweeting and password changes, and locked any account that had recently changed its password.

What was accessed?

Twitter assured its users that, for all but the 130 hacked accounts, no personal information was compromised. However, it is likely the hackers saw the personal information of those 130 users, such as phone numbers and email addresses. For the 45 accounts that were taken over, more information was compromised, though Twitter did not state what that information could be. For eight of the accounts, the hackers downloaded the users' information, such as a summary of the user's activity and account details. It is unclear at this time which eight accounts were affected.

Investigators are trying to identify the hackers – foreign state interference is not suspected.

Investigators are trying to figure out whether a Twitter employee was involved or whether, as Twitter claimed, the hacking was orchestrated through social engineering, in which an individual posed as a trusted employee to gain credentials and account access. The Federal Bureau of Investigation said, "the accounts appear to have been compromised in order to perpetuate cryptocurrency fraud." U.S. senators have demanded that Twitter brief them by July 23, 2020. New York Governor Andrew Cuomo announced the state will conduct a full investigation.

According to an exclusive New York Times interview with four of the culprits, the organized hacking scheme was not politically motivated, despite targeting some political and corporate elites. The New York Times verified the hackers' identities – "lol," "ever so anxious," and two others – by matching their social media and cryptocurrency accounts. The hackers also provided photos of their chat logs. Security journalist Brian Krebs identified another key player in the Twitter hack, "PlugWalkJoe." Investigators have confirmed some of the information relayed in the New York Times interview. "lol" is a 20-something living on the West Coast of the United States. "ever so anxious" is 19 and lives with his mother in the south of England. Both are well-known gamers on OGusers.com. "PlugWalkJoe," whose real name is Joseph O'Connor, is 21, British, and was in Spain when the hacking scheme started. Mr. O'Connor insists he played no part in Wednesday's events. By contrast, "Kirk" was unknown before Wednesday's Twitter hack, and his real identity is still under investigation.

The scheme began with messages the previous Tuesday night between two hackers, "Kirk" and "lol." "Kirk" reached out to "lol," claiming he worked at Twitter and demonstrating he could take control of valuable Twitter accounts. The hackers claim they were not part of a foreign interference plot; they are a group of young people – one still living with his mother – obsessed with owning early or unusual user names consisting of a single letter or number, such as @y or @6. But "lol" told the New York Times he suspected "Kirk" did not actually work at Twitter because he was "too willing to damage the company."

Regardless, "Kirk" could take control of almost any Twitter account, including those of former President Barack Obama; Joseph R. Biden, the former vice president and presumptive Democratic presidential nominee; Elon Musk; and other celebrities. The BBC reported that other elites' accounts were hacked too, including those of Bill Gates, Kim Kardashian, Kanye West, Apple, and Uber. Krebs adds Jeff Bezos, former New York Mayor Michael Bloomberg, and Warren Buffett to the list.

Prestige is King – Four hackers were inspired by an obsession with “OG user names.”

According to the hackers, “Kirk” directed the group’s efforts. However, two hackers, “lol” and “ever so anxious,” told the New York Times they sought the prestige of owning an original user name. The two claim they only helped “Kirk” by facilitating the purchases and takeovers of OG, or “original gangster,” user names earlier Wednesday. In their interview, the four hackers insisted they parted ways with “Kirk” before he started taking over higher-profile accounts. In the online gaming world, certain user names associated with the launch of a new online platform – so-called OG user names – are highly desired. These prestigious user names are snagged by the earliest users of the new platform. Many latecomers to the platform want the credibility of the OG user names, and will often pay big bucks to get one.

Wednesday's hacking scheme began with a plan to commandeer and sell OG user names. "Kirk" asked "lol" and "ever so anxious" to act as middlemen for the sale of some Twitter OG user names, promising the two a cut of each transaction they secured. For example, the first "deal" "lol" brokered involved a person offering $1,500 in Bitcoin for the "@y" user name. The group posted an advertisement on OGusers.com and customers poured in. The group sold user names like @b, @dark, @l, @R9, @vague, @w, @6, and @50. One buyer, and possible culprit, "PlugWalkJoe," bought the "@6" user name from "ever so anxious," while "ever so anxious" commandeered the user name "@anxious" for himself. Nearly all the payments related to the Twitter hack went into one Bitcoin wallet, used predominantly by "Kirk" throughout the day.

Election Day 2020 Concerns

Because high-profile politicians' accounts were compromised in Wednesday's Twitter hack, many have expressed concerns about potential disinformation campaigns closer to November 3rd. These concerns are exacerbated by the fact that Twitter did not detect the hacking scheme until hours after the hacks started. While U.S. and state government officials have sought to protect voting systems against potential hacking, Wednesday's chaos has shown that efforts to protect the security of the upcoming presidential election might need renewed attention. The investigations into the Twitter hack are still ongoing, and many details remain unclear.

written by Allison Hedrick

Schrems II: EU Court of Justice strikes down US-EU "Privacy Shield," which allowed businesses to transfer data despite lower privacy protections in US

On July 16, 2020, the European Union's top court, the Court of Justice, struck down the trans-Atlantic data privacy transfer pact in a case called Schrems II. The agreement between the US and EU, known as the Privacy Shield, allowed businesses to transfer data between the United States and the European Union even though U.S. privacy laws do not meet the higher level of data protection required by EU law. Data transfer is essential for businesses that rely on the pact to operate across the Atlantic. For example, multinational corporations routinely obtain consumer data, such as shipping information, from the EU for further use in the US. The Court of Justice ruled that the transfer of data left European citizens exposed to US government surveillance and did not comply with EU data privacy law. The Court explained: "although not requiring a third country to ensure a level of protection identical to that guaranteed in the EU legal order, the term 'adequate level of protection' must, as confirmed by recital 104 of that regulation, be understood as requiring the third country in fact to ensure, by reason of its domestic law or its international commitments, a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union by virtue of the regulation, read in the light of the Charter."

Companies in the U.S. can work out privacy protections by contract, but such contracts also must comply with EU privacy standards. The Court explained: "the assessment of the level of protection afforded in the context of such a transfer must, in particular, take into consideration both the contractual clauses agreed between the controller or processor established in the European Union and the recipient of the transfer established in the third country concerned and, as regards any access by the public authorities of that third country to the personal data transferred, the relevant aspects of the legal system of that third country, in particular those set out, in a non-exhaustive manner, in Article 45(2) of that regulation."

Ars Technica explains the origins of Privacy Shield and the troubles that have long dogged the agreement. Before Privacy Shield was adopted, the agreement governing the sharing of consumer data across the Atlantic was called the Safe Harbor. In 2015, the Safe Harbor was invalidated by the Court of Justice after being challenged by Maximillian Schrems, an Austrian privacy advocate, because it conflicted with EU law. After the Safe Harbor was struck down, EU lawmakers and the US Department of Commerce negotiated the Privacy Shield, which went into effect in 2016. But many in the EU questioned its validity and lawfulness.

In Schrems II, the Court of Justice agreed. According to Axios, Schrems complained that the clauses in Facebook's data transfer contracts were insufficient to protect Europeans from US government surveillance. The Court agreed, ruling that once the data entered the US, it was impossible to adequately ensure its protection. European citizens would have no redress in the US for violations of EU privacy standards, and the Privacy Shield did not provide equivalent privacy protection.

So what happens next? EU and US officials must negotiate a new data-sharing agreement that provides a level of privacy protection essentially equivalent to the EU's. Tech companies like Google and Facebook have issued assurances that the decision will not affect their operations in Europe because they have alternative data-transfer contracts, according to Ars Technica. It remains to be seen whether a new trans-Atlantic data-sharing agreement can be reached in a way that comports with EU privacy law.

-written by Bisola Oni

ProPublica, FirstDraft study: Nearly 50% of Top Performing Facebook Posts on Mail-In Ballots Were False or Misleading

ProPublica and FirstDraft conducted a study of "Facebook posts using voting-related keywords — including the terms 'vote by mail,' 'mail-in ballots,' 'voter fraud' and 'stolen elections' — since early April [2020]." According to ProPublica, Donald Trump and conservatives have misrepresented mail-in voting as leading to voter fraud. That assertion has not been substantiated. For example, the Washington Post found that states with all-mail elections — Colorado, Oregon and Washington — had only 372 potential irregularities out of 14.6 million votes, or just 0.0025%. According to a recent study by Prof. Nicolas Berlinski and others, unsubstantiated claims of voter fraud can negatively affect public confidence in elections. The false claims can significantly undermine the faith of voters, Republican or Democratic, in the electoral process, even when the misinformation is disproved by fact-checks.
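
For readers who want to check the arithmetic, the Washington Post figure works out as follows. This is a minimal sketch using only the numbers cited above:

```python
# Back-of-the-envelope check of the Washington Post figures cited above.
potential_irregularities = 372        # flagged cases in CO, OR, and WA
total_votes = 14_600_000              # roughly 14.6 million mail-in votes

rate = potential_irregularities / total_votes
print(f"{rate:.7f}")                  # ~0.0000255
print(f"{rate * 100:.4f}%")           # ~0.0025% of all votes cast
```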

In the study, ProPublica and FirstDraft found numerous posts on Facebook that contained misinformation about mail-in ballots. The study concluded that "[o]f the top 50 posts, ranked by total interactions, that mentioned voting by mail since April 1, 2020 [to July 2020]," nearly half "contained false or substantially misleading claims about voting, particularly about mail-in ballots." ProPublica identified the popular Facebook posts by using engagement data from CrowdTangle.
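
The study's exact methodology is not reproduced here; the snippet below is only a rough sketch of the kind of filter-and-rank step it describes – pulling posts that mention the voting keywords and ordering them by total interactions. The file name and column names are hypothetical stand-ins for a CrowdTangle-style export.

```python
import csv

# Hypothetical keywords and column names for an engagement-data export.
KEYWORDS = ("vote by mail", "mail-in ballots", "voter fraud", "stolen elections")

def top_posts(path, n=50):
    """Return the n posts mentioning any voting keyword, ranked by total interactions."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = [r for r in csv.DictReader(f)
                if any(k in r["message"].lower() for k in KEYWORDS)]
    return sorted(rows, key=lambda r: int(r["total_interactions"]), reverse=True)[:n]

# Example usage with a hypothetical export covering April-July 2020.
for post in top_posts("facebook_posts_april_to_july_2020.csv"):
    print(post["total_interactions"], post["url"])
```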

Facebook’s community standards state that no one shall post content that contains “[m]isrepresentation of the . . . methods for voting or voter registration or census participation.” Facebook CEO Mark Zuckerberg recently said on his Facebook page in June 2020 that he stands against “anything that incites violence or suppresses voting,” and that the company is “running the largest voting information campaign in American history . . . to connect people with authoritative information about the elections . . . crack down on voter suppression, and fight hate speech.” Facebook reportedly removed more than 100,000 posts from Facebook and Instagram that violated the company's community standard against voter suppression from March to May 2020. As ProPublica reported, California Secretary of State Alex Padilla stated that "Facebook has removed more than 90% of false posts referred to it by VoteSure, a 2018 initiative by the state of California to educate voters and flag misinformation."

However, according to the joint project by ProPublica and FirstDraft, Facebook is still falling well short in its efforts to stop election misinformation. Facebook has failed to take down posts from individual accounts and group pages that contain false claims about mail-in ballots and voter fraud, including some portraying "people of color as the face of voter fraud."

Facebook is reportedly considering banning political ads in the days before the election, but that hardly touches the core of the problem – misinformation about mail-in ballots spread through ordinary posts. False claims are far more widespread in posts than in ads, according to the ProPublica and FirstDraft study.

--written by Yucheng “Quentin” Cui

Revisiting Reddit's Attempt to Stop "Secondary Infektion" Misinformation Campaign from Russia

 

Last year, Reddit announced that it had banned 61 accounts in connection with a disinformation campaign dubbed "Secondary Infektion," led by a Russian group. The campaign was exposed by Facebook in June 2019 for creating fake news in multiple languages across multiple nations, aiming to "divide, discredit, and distract Western countries" through the dissemination of false information such as assassination plans, attacks on Ukraine and its pro-Western government, and disputes between Germany and the US. This time, the operation created fake accounts and uploaded "leaked UK documents" on Reddit. A research firm, Graphika Labs, inspected the associated accounts and concluded they were linked to Secondary Infektion based on the same grammatical errors and language patterns.

Reddit's investigation into the suspicious accounts started with user reports of questionable posts. Reddit then worked with Graphika and soon found a "pattern of coordination" similar to that of previously reported accounts linked to Secondary Infektion, which allowed it to "use these accounts to identify additional suspect accounts that were part of the campaign on Reddit."

As Reddit's statement put it, the company "encourage[s] users, moderators, and 3rd parties to report things to us as soon as they see them." This statement reflects how much Reddit depends on its community to help moderate the site. Reddit is a heavily community-based platform: a collection of forums where users share content and comment on just about anything. To the left of every post are two buttons – the upvote and the downvote – which allow users themselves to rate content. The total score of a post is essentially the number of upvotes minus the number of downvotes, and a post's position on a page is largely determined by its score rank. Basically, a higher score means more visibility.
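
In simplified form, that scoring logic looks something like the sketch below. This is only an illustration of the upvote-minus-downvote idea described above, not Reddit's actual ranking code, which also weighs other factors such as post age.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int

    @property
    def score(self) -> int:
        # A post's score is essentially upvotes minus downvotes.
        return self.upvotes - self.downvotes

posts = [
    Post("Cute dog", 120, 4),
    Post("Suspicious 'leaked documents'", 30, 95),
    Post("Election megathread", 540, 60),
]

# Higher score -> higher placement on the page -> more visibility.
for p in sorted(posts, key=lambda p: p.score, reverse=True):
    print(p.score, p.title)
```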

The voting system is liked by many users because, unlike Facebook or Twitter, Reddit is more like a community curated by its users themselves. However, the voting system has its drawbacks: it can be gamed and manipulated. First, because everyone has a degree of moderating power, personal beliefs and agendas may get in the way. For example, a person may create several accounts just to downvote a post with which he does not agree. As a result, information may be buried by gaming the system. Second, there is a risk of content manipulation through coordinated attacks. As Reddit's June security report stated, the company has been heavily focused on content manipulation around the 2020 elections and on ensuring that minority voices are heard. Reddit has accordingly invested in bot detection and in countering other malicious activity. Admins have vast powers, including flagging fake accounts, and can try to ensure diversity of viewpoints and participation.

Reddit could consider changing some of its platform features. As some redditors have pointed out, Reddit's "gilding" feature, which is akin to a "super-upvote," may enable manipulation. Users can gild posts with their Reddit Gold subscription or simply buy Reddit coins. Together with the voting system, gilding may make content manipulation easier: a malicious operation can just buy coins to promote content as it wishes, even without creating fake accounts. Offering subscriptions is apparently Reddit's way to cover its costs and turn a profit, and subscriptions do offer other privileges, such as an ad-free experience. Nonetheless, if Reddit wants to stop content manipulation, perhaps the company needs to rethink the gilding power.

--written by Candice Wang

 

Facebook Removes Hundreds of Accounts, Pages for Violating Foreign Interference and Coordinated Inauthentic Behavior Policy

 

[Image: excerpt from Facebook's policy]

Facebook recently reported that it removed various networks, accounts, and pages from its Facebook and Instagram platforms for violations of its foreign interference policy.

Facebook defines “foreign interference” as “coordinated inauthentic behavior on behalf of a foreign or governmental entity.” Thus, removals resulting from a violation of the foreign interference policy are based on user behavior – not content. The removed networks originated in Canada and Ecuador, Brazil, Ukraine, and the United States.

According to Nathaniel Gleicher, Facebook's Head of Security Policy, these networks engaged in "coordinated inauthentic behavior" (CIB): individuals within each network coordinated with one another through fake accounts to mislead people about who they were and what they were doing. Facebook removed the networks because each focused on domestic audiences and had ties to commercial entities, political campaigns, or political offices.

As outlined in Facebook's report, the Canada and Ecuador network focused its activities on Argentina, Ecuador, El Salvador, Chile, Uruguay, and Venezuela. Individual accounts and pages in this network centered on elections, taking part in local debates on both sides. Some individuals created fake accounts posing as locals of the countries they targeted; others posed as "independent" news platforms in those countries. This network alone comprised 41 Facebook accounts, 77 Facebook pages, and 56 Instagram accounts; about 274,000 accounts followed one or more of the 77 Facebook pages and about 78,000 followed the Instagram accounts; and the network spent about $1.38 million on Facebook advertising.

The Brazil network spanned 35 Facebook accounts, 14 Facebook pages, 1 Facebook group, and 38 Instagram accounts. The network's efforts used a host of fake and duplicate accounts – some posing as reporters, others posting fictitious news articles, and pages claiming to be news sources. This network collected nearly 883,000 page followers, about 350 group members, and 917,000 Instagram followers, and it spent about $1,500 on Facebook advertising.

The Ukraine network created 72 fake Facebook accounts, 35 pages, and 13 Instagram accounts. According to Facebook, this network was most active during the 2019 parliamentary and presidential elections in Ukraine. Nearly 766,000 accounts followed one or more of this network's fake pages, and about 3,800 people followed at least one of the Instagram accounts.

The United States network comprised 54 Facebook accounts, 50 pages, and 4 Instagram accounts. Individuals in this network posed as residents of Florida, posting and commenting on their own content to make it appear more popular. Several of the network's pages had ties to a hate group banned by Facebook in 2018. According to Facebook, this network was most active between 2015 and 2017. It gained about 260,000 followers on at least one of its Facebook pages and nearly 61,500 followers on Instagram, and it spent nearly $308,000 on Facebook advertising.

In the past year alone, Facebook has removed nearly two million fake accounts and dismantled 18 coordinated public manipulation networks. Authentic decision making about voting is the cornerstone of democracy. Every twenty minutes, one million links are shared, twenty million friend requests are sent, and three million messages are sent. Despite Facebook’s efforts, it’s likely we will encounter foreign interference in one way or another online. So, each of us must take steps to protect ourselves from fake accounts and foreign manipulation.

--written by Alison Hedrick

Facebook's Oversight Board for content moderation – too little, too late to combat interference in 2020 election

Facebook has been under fire over the spread of misinformation connected with Russian involvement in the 2016 U.S. presidential election. In April 2018, the idea for an independent oversight board was first discussed when CEO Mark Zuckerberg testified before Congress.

Book Preview: David Shimer’s "Rigged: America, Russia and 100 Years of Covert Electoral Interference."

 

Rigged by David Shimer presents a historical account and comprehensive analysis of how American and Russian covert electoral interference has changed over the last century. American operations have transformed from covert operations into public aid through non-profit organizations. The Russians, however, remain committed to covert strategies – using tactics like cyber-hacking, trolling, and developing "fake news" posts on popular social media platforms, as Luke Harding reported in a book review for the Guardian. To prepare for the book, David Shimer interviewed over 130 officials, from a former KGB general to eight previous CIA directors, and scoured archives across six countries.

Shimer defines “covert electoral interference” by three elements:

  1. Covert – “non-attributable,” e.g. “the hand of the actor is hidden,”
  2. Electoral – “targeting a democratic vote of succession,” e.g., casting ballots for a candidate, and
  3. Interference – “deploying active measures” to achieve the result.

Accordingly, in Rigged, “covert electoral interference” identifies “a concealed foreign effort to manipulate a democratic vote of succession.”

Rigged highlights American and Russian covert electoral interference operations worldwide

Shimer explains that American and Russian interference in the elections of third countries took root during the Cold War. Their reasons were obvious: the Americans wanted to keep communists out of power; the Russians wanted them in. In 1947, for the first time, the CIA acted to ensure the success of pro-democratic leaders. The first operation took place in Italy, where the CIA supplied the Christian Democratic party with money, had Italian Americans write letters home, and worked closely with the Church. Three years later, the CIA conducted similar operations in Guatemala and Iran. The KGB simultaneously directed money to communist-friendly leaders in Latin American and African nations. Russian electoral interference, by contrast, began in 1919, twenty-nine years before the United States' operations, as Shimer explained to CBS. Russia first tried interfering in American elections in the 1960s and 70s, targeting Richard Nixon and Ronald Reagan. Those attempts failed. In Rigged, Shimer highlights Russian interference in the 1960, 1968, 1976, 1984, and 2016 elections.

Rigged reveals additional details about Russian interference in 2016 US presidential election

President Obama knew there was undeniable evidence showing Russia was interfering in the 2016 election but did nothing to stop it. While many believe voting tallies were not affected, some, like former Senate Majority Leader Harry Reid, say there is no doubt Russian interference included vote tampering.

In Rigged, Shimer argues that Putin succeeded where all other Russian leaders had failed – he has successfully divided American society and placed a Russian-friendly leader in the White House. One goal of Rigged is to dispel the myth that the 2016 election was a new phenomenon. It wasn’t. Russia’s interference in the 2016 election was a continuation of its tried and true covert operations through exploiting new technologies. Social media platforms enabled Russia to use its disruptive tactic with ease.

Shimer is certain of one thing: Russia influenced the minds of over 100 million American voters leading up to the 2016 presidential election.

Russia is targeting American minds

According to Shimer, Russia seeks to delegitimize the American presidential election by casting doubt on the fairness of the result. The goal is to undermine the American democratic system, not necessarily support one candidate over another. During the CBS interview, Shimer reminded us: “Because what Putin's after here, is chaos, is dysfunction, is corrupting democracies. Trump is a means to that end. But there are other ways of achieving it, one of which is just making Americans wonder whether their election was fair at all.”

In Rigged, Shimer argues Putin’s chief goal is to change the landscape of American politics – to elect leaders who will degrade American democracy and remove Americans’ faith in the electoral process – something we need to resist.

-written by Allison Hedrick

Infodemic: The Spread of Misinformation Regarding the COVID-19 Pandemic, Why it Matters, and How it is Being Handled

As communities all over the world continue to adjust their day-to-day lives amid the COVID-19 pandemic, we are also battling another pandemic – the spread of misinformation about COVID-19. Since the beginning of the pandemic, what scientists know about the virus has continuously changed. Though this kind of evolution is common in science, it is fostering an environment of uncertainty, and people are having a hard time deciphering what is accurate or true. Social media platforms such as Facebook and WhatsApp are being criticized for allowing the spread of misinformation. But if lies are spread around the internet daily, what makes this misinformation so different? Phil Howard, director of the Oxford Internet Institute, explained that the difference is that this "infodemic," or spread of COVID misinformation, "can kill people if they don't understand what precautions to take."

COVID Misinformation

With increased unemployment and limited mobility, people are spending more time at home and on the internet than ever. More time on the internet translates to more information consumption on various topics, COVID-19 included. The Pew Research Center conducted a survey in early June on Americans' consumption of information through social media platforms. It found that 38% of Americans have found it increasingly difficult to identify accurate information about the pandemic. 71% of Americans say they have heard at least one conspiracy theory claiming the pandemic was planned by people in power, and a third of those people believe there is some truth to the conspiracies they have heard. This survey sheds light not only on the increasing confusion Americans are facing, but also on how readily people believe conspiracies fueled by distrust of the government. Researchers believe this may be a digital literacy issue: people use the internet but are not taught in schools and workplaces how to navigate it.

Lack of Legal Remedies

The spread of misinformation, or "fake news," is not only increasing but ever changing, and the legal remedies available for COVID misinformation are quite limited. According to the National Law Review, there are three types of fake news. Type 1 is spoofing, in which a content provider copies a real news source in a way that causes consumer confusion: consumers are tricked into thinking they are receiving information from a legitimate source. Type 2 is poaching, in which a content provider intentionally creates a publication significantly similar to an established news source; though not an exact copy, it is similar enough to confuse the news consumer. Both spoofing and poaching potentially violate trademark laws and other laws, and remedies can be sought in federal court. However, the owners of these sites are often hard to locate and based in foreign countries, making enforcement a costly endeavor. Lastly, Type 3 is original sensationalism, in which a content provider creates an original publication with original content but relies on the sensationalism surrounding the topic to disseminate misinformation. Original sensationalism is the most common type of fake news and is nearly impossible to remedy with legal action. The FDA can bring actions against entities claiming fraudulent therapeutics or cures, but if the misinformation falls outside that parameter, such as the controversy over wearing masks as a preventative measure, the law might not reach it. The lack of meaningful legal remedies places greater expectations on social media platforms to take accountability and enforce policies against COVID misinformation, especially when it is detrimental to health and safety.

Social Media Platforms Response

Nowadays it is second nature for most people to go to social media platforms to discuss anything from movies and music to politics. The spread of an unprecedented virus is no different. Though social media has been used to share helpful information about the pandemic, appreciation for healthcare workers, and memes to help people cope, it has also become a breeding ground for misinformation, and people have been pushing to hold social media platforms like Facebook and WhatsApp (also owned by Facebook) accountable. Internet platforms have attempted to combat COVID misinformation, but the challenges of monitoring millions of posts or communications for such misinformation are daunting.

Facebook has well over 2 billion users worldwide and is no stranger to fake news criticism. Facebook has faced backlash over American election and political fake news, and similar backlash is happening in relation to COVID-19. A study conducted by the international advocacy group Avaaz in mid-April 2020 found that millions of Facebook users "are still being put at risk of consuming harmful misinformation on coronavirus at a large scale." Even taking Facebook's internal anti-misinformation team into account, "41% of misinformation still remains on the platform without warning labels." Of that misinformation, 65% had been established as false by Facebook's own fact-checking partners. In response to this study and other critiques, on May 12, 2020, Facebook spoke out in a blog post detailing the actions it is taking to limit the spread of misinformation. Facebook stated it has directed over 2 billion users to accurate information from the WHO and other health organizations, with over 350 million people clicking on the resources. It has also started working with 60 fact-checking organizations that assess content in more than 50 languages. These partnerships have allowed Facebook to display warnings on approximately 40 million COVID-related posts, and 95% of users who encounter the warnings do not click through to the original content.

Data from May 3, 2020 shows there are more than 2 billion users of WhatsApp (which is owned by Facebook) in 180 countries. Users rely on the application not only for intimate conversations but also for large interest groups, making it a widespread platform in which millions of conversations about the pandemic take place daily. About a month into the pandemic lockdowns, on April 7, 2020, WhatsApp announced in a blog post that it wants to keep the application focused on personal and private conversations rather than the mass dissemination of information without thorough review. It therefore decided to further limit forwarding: a message that has already been forwarded many times can now be forwarded to only one chat at a time. WhatsApp noted that it had imposed forwarding limits previously and had seen a 25% decrease in messages forwarded globally. It has also published tips on how to distinguish truth from fake news and has partnered with the World Health Organization (WHO) to help connect users with accurate information.
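
WhatsApp has not published its implementation, but the kind of rule described above can be sketched roughly as follows. The threshold and limits are assumptions for illustration, based on WhatsApp's public description of "highly forwarded" messages; the function and constant names are hypothetical.

```python
# Rough sketch of a forwarding-limit rule like the one WhatsApp describes.
# Exact values and field names are assumptions for illustration only.

HIGHLY_FORWARDED_THRESHOLD = 5   # forwards before a message counts as "highly forwarded"
NORMAL_FORWARD_LIMIT = 5         # chats a normal message may be forwarded to at once
HIGHLY_FORWARDED_LIMIT = 1       # chats a highly forwarded message may be forwarded to

def allowed_forward_targets(forward_count: int) -> int:
    """Return how many chats a message may be forwarded to in one action."""
    if forward_count >= HIGHLY_FORWARDED_THRESHOLD:
        return HIGHLY_FORWARDED_LIMIT
    return NORMAL_FORWARD_LIMIT

print(allowed_forward_targets(0))  # 5 chats for a fresh message
print(allowed_forward_targets(7))  # 1 chat for a highly forwarded message
```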

Misinformation about the COVID-19 pandemic will continue to be created and spread across the world. Social media platforms have implemented policies to stop the spread of misinformation, but it remains to be seen whether these measures are effective. As COVID-19 surges in the United States and other parts of the world, it is imperative that Internet platforms do their part in combating dangerous COVID misinformation.

-written by Mariam Tabrez

TikTok is all the rage, so why did India ban it?

TikTok is a social media platform owned by a Chinese firm named ByteDance. The app was first developed in China but has grown more and more popular, especially among teens, all over the world for its combination of music, dance, and quirky humor in short, user-created videos. Another popular feature is live-streaming, which allows real-time interaction between the host and the audience. Users do not even need to speak English to become an overnight hit with millions of followers on TikTok.

TikTok has become a phenomenon. The idea of producing short clips is not new – Snapchat and Instagram have similar functions – and creating videos has been around since YouTube. But, with an enormous user base in China, this new contender surpassed other video-sharing sites and gained incredible popularity. Presently, TikTok has over 500 million active users worldwide. TikTok's worldwide success as an Internet platform is rare for a China-based company. China's strict internet restrictions are well known: because of its firewalls, mainstream Western social media sites, such as Facebook and Twitter, are inaccessible in China.

India announced the controversial decision to ban TikTok within its borders. Why? As the border clash between China and India escalated, the Indian government banned 59 Chinese apps, including TikTok, citing concerns over activities prejudicial to the sovereignty and integrity of India, according to the New York Times. Alternative Indian platforms such as Glance and Roposo are eager to pick up new users after TikTok's exit, but watchdog groups are concerned that Indian local apps may also be censored and controlled by the government or exploited for political propaganda. While banning TikTok could be a form of retaliation against China for the border skirmish, the ban could also be viewed as a sign of India's determination to safeguard its citizens' data from foreign manipulation.

Taking a cue from India's decision, the US is considering a ban on TikTok too. US Secretary of State Mike Pompeo warned Americans not to use the app unless "you want your private information in the hands of the Chinese Communist Party," suggesting the app secretly funnels users' data to the Chinese government.

The Chinese government, which has a reputation for exercising a tight grip over the internet, is frequently accused of privacy breaches. ByteDance, the Chinese firm that owns TikTok, has encountered several challenges as it expanded worldwide. In February 2019, ByteDance was fined $5.7 million (about £4.2 million) by the US Federal Trade Commission for illegally collecting personal information from children under 13 without parental consent. On July 3, 2020, the head of the UK's Information Commissioner's Office announced that TikTok was under a similar investigation regarding the protection of children's personal data, as its open messaging system permits adults to directly contact children and thus exposes children to risks such as online solicitation and harassment.

Of course, data breaches in social media are not uncommon in the modern digital age. Facebook has been accused multiple times of harvesting users' private information without their consent. Thus, banning TikTok in the name of privacy protection sounds extreme, since other social media companies' data practices have not resulted in an entire platform being banned in a country.

Some users suspect that the major impetus for a US ban on TikTok was the significant role TikTok played during the Black Lives Matter protests. In the pandemic era, TikTok has fostered new forms of political expression. For example, activists who could not march in the street in person created videos with the hashtag #blacklivesmatter to demonstrate online solidarity against racial injustice. As CNN reported, users on TikTok also live-streamed street protests and documented police assaulting peaceful demonstrators. TikTok lowered the barriers to communication, allowing users from all over the globe to share content and exchange ideas. Apart from showing cute dogs, teenagers' funny dance steps, and other mundane occurrences, TikTok has also entered the political sphere, even though few politicians are active on the site. Despite the alleged privacy and national security concerns, it is one of the fastest and most unfiltered ways for people to spread messages.

"Any kind of public policy response which is premised on grounds of national security needs to emerge from well-defined criteria, which seems to be absent here," the executive director of the Internet Freedom Foundation, Mr. Gupta, told the New York Times. Banning may be a quick fix, but if authorities can ban an app in the name of protecting citizens' data without showing clear evidence supporting the claim or legal authority for such an extreme action, it sets a dangerous precedent that would greatly impair internet freedom. Of course, there remains the tension that popular Western-based social media platforms are still banned in China.

-written by Candice Wang
