The Free Internet Project


How are Twitter, Facebook, other Internet platforms going to stop 2020 election misinformation in U.S.?

With millions of users in the United States, Internet platforms such as Facebook and Twitter have become a common source of news, information, and socialization for many Americans. These platforms have been criticized for allowing the spread of misinformation about political issues and the COVID-19 pandemic. The criticism intensified after misinformation spread during the 2016 U.S. presidential election, when many false sources went unchecked and millions of Americans were convinced they were consuming legitimate news when they were actually reading “fake news.” It is safe to say that these platforms do not want to repeat those mistakes. Ahead of the 2020 U.S. elections, both Facebook and Twitter have taken various actions to ensure misinformation is not only identified, but either removed or flagged as untrue, with links to easily accessible, credible resources for their users.

Facebook’s Plan to Stop Election Misinformation

Facebook has faced perhaps the most criticism for the spread of misinformation. In response, Facebook has created a voting information center, similar to its COVID-19 information center, that will appear in the Facebook and Instagram menus.

Facebook's Voting Information Center

This hub will target United States users only and will contain election information based on the user’s geographic location. For example, a user in Orange County, Florida will see information on vote-by-mail options and poll locations in that area. In addition, Facebook plans to add notations to some posts containing election misinformation, with a link to verified information in the voting information center. Facebook will also offer voting alerts, which will “communicate election-related updates to users through the platform.” Only official government accounts will have access to these voting alerts. Yet one sore spot appears to remain for Facebook: it does not fact-check the posts or ads of politicians because, as CEO Mark Zuckerberg has repeatedly said, Facebook does not want to be the "arbiter of truth."

Twitter’s Plan to Stop Election Misinformation

Twitter has been in the press recently for the warnings it placed on several of President Trump’s tweets for misinformation and glorification of violence. These warnings are part of Twitter’s community standards and its Civic Integrity Policy, which was enacted in May 2020. This policy prohibits the use of “Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes.” Civic processes include “political elections, censuses, and major referenda and ballot initiatives.” Twitter also banned political campaign ads starting in October 2019. Twitter recently released a statement saying its aim is to “empower every eligible person to register to vote” by providing various resources for users to educate themselves on issues facing society today. Twitter officials told Reuters they will be expanding the Civic Integrity Policy to address election misinformation, such as mischaracterizations of mail-in voting. But, as TechCrunch points out, "hyperpartisan content or making broad claims that elections are 'rigged' ... do not currently constitute a civic integrity policy violation, per Twitter’s guidance." Twitter also announced that it will label state-affiliated media, such as outlets controlled by Russia, with a notation.

In addition to these policies, Facebook and Twitter, as well as other Internet platforms including Microsoft, Pinterest, Verizon Media, LinkedIn, and the Wikimedia Foundation, have all decided to work closely with U.S. government agencies to make sure the integrity of the election is not jeopardized. These tech companies have specifically discussed how the upcoming political conventions and specific scenarios arising from the election results will be handled. Though no details have been given, this seems to be a promising start to ensuring the internet does more good than harm in relation to politics.

--written by Mariam Tabrez

Infodemic: The Spread of Misinformation Regarding the COVID-19 Pandemic, Why it Matters, and How it is Being Handled

As communities all over the world continue to adjust their day-to-day lives around the COVID-19 pandemic, we are also battling another pandemic: the spread of misinformation about COVID-19. Since the beginning of the pandemic, what scientists know about the virus has continuously changed. Though such evolution is normal in science, it fosters an environment of uncertainty, and people have a hard time deciphering what is accurate or true. Social media platforms such as Facebook and WhatsApp are being criticized for allowing the spread of misinformation. But if lies are spread around the internet daily, what makes this misinformation so different? Phil Howard, director of the Oxford Internet Institute, explained that the difference is this "infodemic" of COVID misinformation “can kill people if they don’t understand what precautions to take.”

COVID Misinformation

With increased unemployment and limited mobility, people are spending more time at home and on the internet than ever. More time on the internet translates to more information consumption on various topics, COVID-19 included. The Pew Research Center conducted a survey in early June on Americans’ consumption of information through social media platforms. It found that 38% of Americans have found it increasingly difficult to identify accurate information about the pandemic. 71% of Americans say they have heard at least one conspiracy theory claiming the pandemic was planned by people in power, and a third of those respondents believe there is some truth to the conspiracies they have heard. This survey sheds light not only on the increasing confusion Americans are facing, but also on how readily they believe conspiracies fueled by distrust in the government. Researchers believe this may be a digital literacy issue: people use the internet but are not taught in schools and workplaces how to navigate it critically.

Lack of Legal Remedies

The spread of misinformation or “fake news” is not only increasing but ever changing, and the legal remedies available for COVID misinformation are quite limited. According to the National Law Review, there are three types of fake news. Type 1 is spoofing, in which a content provider copies a real news source in a way that causes consumer confusion; consumers are tricked into thinking they are receiving information from a legitimate source. Type 2 is poaching, in which a content provider intentionally creates a publication significantly similar to an established news source. Though not an exact copy, it is similar enough to confuse the news consumer. Both spoofing and poaching potentially violate trademark and other laws, and remedies can be sought in federal court. However, the owners of these sites are often hard to locate and based in foreign countries, making enforcement a costly endeavor. Lastly, Type 3 is original sensationalism, in which a content provider creates an original publication with original content but relies on the sensationalism surrounding the topic to disseminate misinformation. Original sensationalism is the most common type of fake news and is nearly impossible to remedy with legal action. The FDA can bring actions against entities claiming fraudulent therapeutics or cures, but if the misinformation falls outside that parameter, such as the controversy over wearing masks as a preventative measure, the law might not reach it. The lack of meaningful legal remedies results in greater expectations being placed on social media platforms to take accountability and enforce policies against COVID misinformation, especially when it is detrimental to health and safety.

Social Media Platforms Response

Nowadays it is second nature for most people to go to social media platforms to discuss anything from movies and music to politics. The spread of an unprecedented virus is no different. Though social media has been used to share helpful information about the pandemic, appreciation for healthcare workers, and memes to help people cope with what is happening, it has also become a breeding ground for misinformation, and people have been pushing to hold social media platforms like Facebook and WhatsApp (also owned by Facebook) accountable. Internet platforms have attempted to combat COVID misinformation, but the challenges of monitoring millions of posts or communications for such misinformation are daunting.

Facebook has over 2.6 billion users worldwide and is no stranger to fake news criticism. Facebook has faced backlash over American election and political fake news, and similar backlash is now happening in relation to COVID-19. A study conducted by the international advocacy group Avaaz in mid-April 2020 found that millions of Facebook users “are still being put at risk of consuming harmful misinformation on coronavirus at a large scale.” Even taking Facebook’s internal anti-misinformation team into account, “41% of misinformation still remains on the platform without warning labels.” Moreover, 65% of that misinformation had been established as false by Facebook’s own fact-checking partners. In response to this study and other critiques, on May 12, 2020, Facebook finally spoke out in a blog post detailing the actions it is taking to limit the spread of misinformation. Facebook stated it has directed over 2 billion users to accurate information from the WHO and other health organizations, with over 350 million people clicking on the resources. It has also started working with 60 fact-checking organizations that assess content in more than 50 languages. These partnerships have allowed Facebook to display warnings on approximately 40 million COVID-related posts, and 95% of users who encounter these warnings do not click through to the original content.

Data from May 3, 2020 shows there are more than 2 billion users of WhatsApp (which is owned by Facebook) in 180 countries. These users utilize the application not only for intimate conversations but also for large interest groups, making it a widespread platform where millions of conversations centered on the pandemic happen daily. About a month into the pandemic lockdown, on April 7, 2020, WhatsApp announced in a blog post that it wants to keep the application focused on personal and private conversations rather than the mass dissemination of information without thorough review. Therefore, it decided to further limit the number of users and groups a user can forward messages to. WhatsApp states that an earlier forwarding limit produced a 25% decrease in messages forwarded globally. It has also published tips on how to distinguish truth from fake news, and has partnered with the World Health Organization (WHO) to help connect users with accurate information.

Misinformation regarding the COVID-19 pandemic will continue to be created and spread all across the world. Social media platforms have implemented policies to stop the spread of misinformation; however, it remains to be seen whether these measures are effective. As COVID-19 surges in the United States and other parts of the world, it is imperative that Internet platforms do their part in combating dangerous COVID misinformation.

-written by Mariam Tabrez
