As communities all over the world continue to adjust their day-to-day lives around the COVID-19 pandemic, we are also battling another pandemic: the spread of misinformation about COVID-19. Since the beginning of the pandemic, what scientists know about the virus has continuously changed. Though this kind of evolution is common in science, it has fostered an environment of uncertainty, and people are having a hard time deciphering what is accurate or true. Social media platforms such as Facebook and WhatsApp have been criticized for allowing the spread of misinformation. But if lies spread around the internet daily, what makes this misinformation so different? Phil Howard, director of the Oxford Internet Institute, explained that the difference is that this "infodemic," or spread of COVID misinformation, "can kill people if they don't understand what precautions to take."
With increased unemployment and limited mobility, people are spending more time at home and on the internet than ever. More time online translates to more information consumption on all sorts of topics, COVID-19 included. The Pew Research Center conducted a survey in early June on Americans' consumption of information through social media platforms. It found that 38% of Americans have found it increasingly difficult to identify accurate information about the pandemic, and 71% say they have heard at least one conspiracy theory claiming the pandemic was planned by people in power. A third of those who heard such theories believe there is some truth to them. The survey sheds light not only on the growing confusion Americans face, but also on how readily they believe conspiracies fueled by distrust in the government. Researchers believe this may be a digital literacy problem: people use the internet but are not taught in schools or workplaces how to navigate it.
Lack of Legal Remedies
The spread of misinformation, or "fake news," is not only increasing but ever changing, and the legal remedies available for COVID misinformation are quite limited. According to the National Law Review, there are three types of fake news. Type 1 is spoofing: a content provider copies a real news source, causing consumer confusion. Consumers are tricked into thinking they are receiving information from a legitimate source. Type 2 is poaching: a content provider intentionally creates a publication significantly similar to an established news source. Though not an exact copy, it is similar enough to confuse the news consumer. Both spoofing and poaching potentially violate trademark and other laws, and remedies can be sought in federal court. In practice, however, the owners of these sites are often hard to locate or based in foreign countries, making enforcement a costly endeavor. Lastly, Type 3 is original sensationalism: a content provider creates an original publication with original content but relies on the sensationalism surrounding a topic to disseminate misinformation about it. Original sensationalism is the most common type of fake news and is nearly impossible to remedy with legal action. The FDA can bring actions against entities claiming fraudulent therapeutics or cures, but if the misinformation falls outside that parameter, such as the controversy over wearing masks as a preventative measure, the law may not reach it. The lack of meaningful legal remedies places greater expectations on social media platforms to take accountability and enforce policies against COVID misinformation, especially when it is detrimental to health and safety.
Social Media Platforms Response
Nowadays it is second nature for most people to turn to social media platforms to discuss anything from movies and music to politics, and the spread of an unprecedented virus is no different. Though social media has been used to share helpful information about the pandemic, appreciation for healthcare workers, and memes that help people cope with what is happening, it has also become a breeding ground for misinformation. As a result, people have been pushing to hold platforms like Facebook and WhatsApp (also owned by Facebook) accountable. Internet platforms have attempted to combat COVID misinformation, but the challenge of monitoring millions of posts and communications for such misinformation is daunting.
Facebook has billions of users worldwide and is no stranger to fake-news criticism, having faced backlash over American election-related and political fake news. Similar backlash is now happening in relation to COVID-19. A study conducted by the international advocacy group Avaaz in mid-April 2020 found that millions of Facebook users "are still being put at risk of consuming harmful misinformation on coronavirus at a large scale." Even taking Facebook's internal anti-misinformation team into account, "41% of misinformation still remains on the platform without warning labels." Moreover, 65% of that content had been established as false by Facebook's own fact-checking partners. In response to this study and other critiques, on May 12, 2020, Facebook published a blog post detailing the actions it is taking to limit the spread of misinformation. It stated that it has directed over 2 billion users to accurate information from the WHO and other health organizations, with over 350 million people clicking through to those resources. Facebook has also begun working with 60 fact-checking organizations that assess content in more than 50 languages. These partnerships have allowed it to display warnings on approximately 40 million COVID-related posts, and 95% of users who encounter these warnings do not click through to the original content.
Data from May 3, 2020 shows that WhatsApp (which is owned by Facebook) has more than 2 billion users in 180 countries. These users rely on the application not only for intimate conversations but also for large interest groups, making it a widespread platform where millions of conversations about the pandemic happen daily. About a month into the pandemic lockdown, on April 7, 2020, WhatsApp announced in a blog post that it wants to keep the application focused on personal and private conversations rather than the mass dissemination of information without thorough review. It therefore decided to further limit the number of users and groups to which a message can be forwarded. WhatsApp states that an earlier forwarding limit produced a 25% decrease in messages forwarded globally. It has also published tips on how to distinguish truth from fake news and has partnered with the World Health Organization (WHO) to help connect users with accurate information.
Misinformation about the COVID-19 pandemic will continue to be created and spread across the world. Social media platforms have implemented policies to stop its spread, but it remains to be seen whether these measures are effective. As COVID-19 surges in the United States and other parts of the world, it is imperative that Internet platforms do their part in combating dangerous COVID misinformation.
-written by Mariam Tabrez