The Free Internet Project


Facebook Removes Hundreds of Accounts, Pages for Violating Foreign Interference and Coordinated Inauthentic Behavior Policy


Facebook recently reported that it removed various networks, accounts, and pages from its Facebook and Instagram platforms for violations of its foreign interference policy.

Facebook defines “foreign interference” as “coordinated inauthentic behavior on behalf of a foreign or governmental entity.” Removals under the foreign interference policy are thus based on user behavior – not content. The removed networks originated in Canada and Ecuador (a single network), Brazil, Ukraine, and the United States.

According to Nathaniel Gleicher, Facebook’s Head of Security Policy, these networks engaged in “coordinated inauthentic behavior” (CIB): individuals within each network used fake accounts to coordinate with one another and to mislead people about who they were and what they were doing. Facebook removed the networks because of their focus on domestic audiences and their ties to commercial entities, political campaigns, and political offices.

As outlined in Facebook's report, the Canada and Ecuador network focused its activities on Argentina, Ecuador, El Salvador, Chile, Uruguay, and Venezuela. Accounts and pages in this network centered on elections, taking part in local debates on both sides. Some individuals created fake accounts posing as locals of the countries they targeted; others posed as “independent” news platforms in those countries. This network alone comprised 41 accounts and 77 pages on Facebook plus 56 Instagram accounts; about 274,000 people followed one or more of the 77 Facebook pages and 78,000 followed the Instagram accounts; and the network spent about $1.38 million on Facebook advertising.

The Brazil network comprised 35 Facebook accounts, 14 Facebook pages, 1 Facebook group, and 38 Instagram accounts. Its efforts relied on a horde of fake and duplicate accounts – some posing as reporters, others posting fictitious news articles, and pages claiming to be news sources. This network amassed nearly 883,000 page followers, 350 group followers, and 917,000 followers on its Instagram accounts, and it spent about $1,500 on Facebook advertising.

The Ukraine network created 72 fake Facebook accounts, 35 pages, and 13 Instagram accounts. According to Facebook, this network was most active during the 2019 parliamentary and presidential elections in Ukraine. Nearly 766,000 people followed one or more of the network’s fake pages, and 3,800 followed at least one of its Instagram accounts.

The United States network comprised 54 Facebook accounts, 50 pages, and 4 Instagram accounts. Individuals in this network posed as residents of Florida – posting and commenting on their own content to make it appear more popular. Several of the network’s pages had ties to a hate group banned by Facebook in 2018. According to Facebook, this network was most active between 2015 and 2017. It gained 260,000 followers on at least one of its Facebook pages and nearly 61,500 followers on Instagram, and it spent nearly $308,000 on Facebook advertising.

In the past year alone, Facebook has removed nearly two million fake accounts and dismantled 18 coordinated public manipulation networks. Authentic decision making about voting is a cornerstone of democracy, yet the platform’s scale is staggering: every twenty minutes, one million links are shared, twenty million friend requests go out, and three million messages are sent. Despite Facebook’s efforts, most of us will likely encounter foreign interference online in one way or another, so each of us must take steps to protect ourselves from fake accounts and foreign manipulation.

--written by Alison Hedrick

Facebook's Oversight Board for content moderation--too little, too late to combat interference in 2020 election

Facebook has been under fire over the spread of misinformation connected with Russian involvement in the 2016 U.S. presidential election. The idea for an independent oversight board was first discussed in April 2018, when CEO Mark Zuckerberg testified before Congress.

Infodemic: The Spread of Misinformation Regarding the COVID-19 Pandemic, Why it Matters, and How it is Being Handled

As communities all over the world continue to adjust their day-to-day lives around the COVID-19 pandemic, we are also battling another pandemic – the spread of misinformation about COVID-19. Since the beginning of the pandemic, what scientists know about the virus has continuously changed. Though this evolution is common in science, it fosters an environment of uncertainty, and people are having a hard time deciphering what is accurate or true. Social media platforms such as Facebook and WhatsApp are being criticized for allowing the spread of misinformation. But if lies are spread around the internet daily, what makes this misinformation so different? Phil Howard, director of the Oxford Internet Institute, explained the difference: this "infodemic," or spread of COVID misinformation, “can kill people if they don’t understand what precautions to take.”

COVID Misinformation

With increased unemployment and limited mobility, people are spending more time at home and on the internet than ever. More time on the internet translates to more information consumption on various topics, COVID-19 included. The Pew Research Center conducted a survey in early June on Americans’ consumption of information through social media platforms. It found that 38% of Americans have found it increasingly difficult to identify accurate information about the pandemic, and 71% of Americans say they have heard at least one conspiracy theory claiming the pandemic was planned by people in power. A third of those who heard such theories even believe there is some truth to them. The survey sheds light not only on the growing confusion Americans face, but also on how readily they believe conspiracies fueled by distrust of the government. Researchers believe this may be a digital literacy issue: people use the internet but are not taught in schools or workplaces how to navigate it.

Lack of Legal Remedies

The spread of misinformation or “fake news” is not only increasing but ever changing, and the legal remedies available for COVID misinformation are quite limited. According to the National Law Review, there are three types of fake news. Type 1 is spoofing: a content provider copies a real news source, causing consumer confusion; consumers are tricked into thinking they are receiving information from a legitimate source. Type 2 is poaching: a content provider intentionally creates a publication substantially similar to an established news source – not an exact copy, but similar enough to confuse the news consumer. Both spoofing and poaching potentially violate trademark and other laws, and remedies can be sought in federal court; however, the owners of these sites are often hard to locate or based in foreign countries, making litigation a costly endeavor. Lastly, Type 3 is original sensationalism: a content provider creates an original publication with original content but relies on the sensationalism surrounding a topic to disseminate misinformation about it. Original sensationalism is the most common type of fake news and is nearly impossible to remedy with legal action. The FDA can bring actions against entities claiming fraudulent therapeutics or cures, but if the misinformation falls outside that parameter – such as the controversy over wearing masks as a preventive measure – the law might not reach it. The lack of meaningful legal remedies places greater expectations on social media platforms to take accountability and enforce policies against COVID misinformation, especially when it is detrimental to health and safety.

Social Media Platforms Response

Nowadays it is second nature for most people to turn to social media platforms to discuss anything from movies and music to politics, and the spread of an unprecedented virus is no different. Though social media has been used to share helpful information about the pandemic, appreciation for healthcare workers, and memes to help people cope with what is happening, it has also become a breeding ground for misinformation, and people have been pushing to hold platforms like Facebook and WhatsApp (also owned by Facebook) accountable. Internet platforms have attempted to combat COVID misinformation, but the challenges of monitoring millions of posts or communications for such misinformation are daunting.

Facebook, with billions of users worldwide, is no stranger to fake news criticism, having faced backlash over fake news related to American elections and politics. Similar backlash is now happening in relation to COVID-19. A study conducted by the international advocacy group Avaaz in mid-April 2020 found that millions of Facebook users “are still being put at risk of consuming harmful misinformation on coronavirus at a large scale.” Even taking Facebook’s internal anti-misinformation team into account, “41% of misinformation still remains on the platform without warning labels,” and 65% of that misinformation had been established as false by Facebook’s own fact-checking partners. In response to this study and other critiques, Facebook finally spoke out in a May 12, 2020 blog post detailing the actions it is taking to limit the spread of misinformation. Facebook stated it has directed over 2 billion users to accurate information from the WHO and other health organizations, with over 350 million people clicking on those resources. It has also begun working with 60 fact-checking organizations that assess content in more than 50 languages. These partnerships have allowed Facebook to display warnings on approximately 40 million COVID-related posts, and 95% of users who encounter these warnings do not click through to the original content.

Data from May 3, 2020 shows that WhatsApp (which is owned by Facebook) has more than 2 billion users in 180 countries. Users rely on the application not only for intimate conversations but also for large interest groups, making it a widespread platform with millions of pandemic-related conversations happening daily. About a month into the pandemic lockdown, on April 7, 2020, WhatsApp announced in a blog post that it wants to keep the application focused on personal and private conversations rather than the mass dissemination of information without thorough review. It therefore decided to further limit the number of users and groups to which a user can forward messages. WhatsApp states that earlier limits of this kind produced a 25% decrease in messages forwarded globally. It has also published tips on how to distinguish truth from fake news and partnered with the World Health Organization (WHO) to help connect users with accurate information.

Misinformation regarding the COVID-19 pandemic will continue to be created and spread across the world. Social media platforms have implemented policies to stop its spread, but it remains to be seen whether these measures are effective. As COVID-19 surges in the United States and other parts of the world, it is imperative that Internet platforms do their part in combating dangerous COVID misinformation.

-written by Mariam Tabrez
