The Free Internet Project

Twitter

How are Twitter, Facebook, other Internet platforms going to stop 2020 election misinformation in U.S.?

With millions of American users on Facebook and Twitter, Internet platforms have become a common source of news, information, and socialization for many Americans. These social media platforms have been criticized for allowing the spread of misinformation about political issues and the COVID-19 pandemic. The criticism began after misinformation spread during the 2016 U.S. presidential election, when many misinformation sources went unchecked and millions of Americans believed they were consuming legitimate news sources when they were actually reading “fake news.” It is safe to say that these platforms do not want to repeat those mistakes. Ahead of the 2020 U.S. elections, both Facebook and Twitter have taken various actions to ensure misinformation is not only identified, but either removed or flagged as untrue, with links to easily accessible, credible resources for their users.

Facebook’s Plan to Stop Election Misinformation

Facebook has faced the most criticism regarding the spread of misinformation. In response, Facebook has created a voting information center, similar to its COVID-19 information center, that will appear in the Facebook and Instagram menus.

Facebook's Voting Information Center

This hub will target United States users only and will contain election information based on the user’s geographic location. For example, if you live in Orange County, Florida, information on vote-by-mail options and poll locations in that area will be provided. In addition, Facebook plans on adding notations to some posts containing election misinformation on Facebook with a link to verified information in the voting information center. Facebook will have voting alerts which will “communicate election-related updates to users through the platform.” Only official government accounts will have access to these voting alerts. Yet one sore spot appears to remain for Facebook: Facebook doesn't fact-check the posts or ads of politicians because, as CEO Mark Zuckerberg has repeatedly said, Facebook does not want to be the "arbiter of truth." 

Twitter’s Plan to Stop Election Misinformation

Twitter has been in the press recently for the warnings it placed on the bottom of a few of President Trump’s tweets for misinformation and glorification of violence. These warnings are part of Twitter’s community standards and Civic Integrity Policy, which was enacted in May 2020. This policy prohibits the use of “Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes.” Civic processes include “political elections, censuses, and major referenda and ballot initiatives.” Twitter also banned political campaign ads starting in October 2019. Twitter recently released a statement stating its aim is to “empower every eligible person to register to vote” by providing various resources for users to educate themselves on issues facing society today. Twitter officials told Reuters they will be expanding the Civic Integrity Policy to address election misinformation, such as mischaracterizations of mail-in voting. But, as TechCrunch points out, "hyperpartisan content or making broad claims that elections are 'rigged' ... do not currently constitute a civic integrity policy violation, per Twitter’s guidance." Twitter also announced that it will identify state-affiliated media, such as outlets controlled by Russia, with a notation.

In addition to these policies, Facebook and Twitter, as well as other information platforms including Microsoft, Pinterest, Verizon Media, LinkedIn, and Wikimedia Foundation have all decided to work closely with U.S. government agencies to make sure the integrity of the election is not jeopardized. These tech companies have specifically discussed how the upcoming political conventions and specific scenarios arising from the election results will be handled. Though no details have been given, this seems to be a promising start to ensuring the internet does more good than bad in relation to politics.

--written by Mariam Tabrez

Facebook removes Donald Trump post regarding children "almost immune" for violating rules on COVID misinformation; Twitter temporarily suspends Trump campaign account for same COVID misinformation

On August 5, 2020, as reported by the Wall St. Journal, Facebook removed a post from Donald Trump that contained a video of an interview he did with Fox News in which he reportedly said that children are "almost immune from this disease." Trump also said COVID-19 “is going to go away,” and that “schools should open” because “it will go away like things go away.” A Facebook spokesperson explained to the Verge: "This video includes false claims that a group of people is immune from COVID-19 which is a violation of our policies around harmful COVID misinformation."

Twitter temporarily suspended the @TeamTrump campaign account from tweeting because of the same content. "The @TeamTrump Tweet you referenced is in violation of the Twitter Rules on COVID-19 misinformation,” Twitter spokesperson Aly Pavela said in a statement to TechCrunch. “The account owner will be required to remove the Tweet before they can Tweet again.” The Trump campaign resumed tweeting, so it appears the campaign complied and removed the tweet.

Neither Facebook nor Twitter provided much explanation of their decisions on their platforms, at least based on our search. They likely interpreted "almost immune from this disease" as misleading because children of every age can be infected by the coronavirus and suffer adverse effects, including death (e.g., a 6-year-old, a 9-year-old, and an 11-year-old). In Florida, for example, 23,170 minors had tested positive for coronavirus by July 2020. The CDC just published a study on the spread of coronavirus among children at a summer camp in Georgia and found extensive infection spread among the children:

These findings demonstrate that SARS-CoV-2 spread efficiently in a youth-centric overnight setting, resulting in high attack rates among persons in all age groups, despite efforts by camp officials to implement most recommended strategies to prevent transmission. Asymptomatic infection was common and potentially contributed to undetected transmission, as has been previously reported (1–4). This investigation adds to the body of evidence demonstrating that children of all ages are susceptible to SARS-CoV-2 infection (1–3) and, contrary to early reports (5,6), might play an important role in transmission (7,8). 

Experts around the world are conducting studies to learn more about how COVID-19 affects children. The Smithsonian Magazine compiles a summary of some of these studies and is well worth reading. One of the studies, from the Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, did examine the hypothesis: "Decreased susceptibility could result from immune cross-protection from other coronaviruses, or from non-specific protection resulting from recent infection by other respiratory viruses, which children experience more frequently than adults." But the study noted: "Direct evidence for decreased susceptibility to SARS-CoV-2 in children has been mixed, but if true could result in lower transmission in the population overall." This inquiry was undertaken because, thus far, fewer children than adults have tested positive. According to the Mayo Clinic Staff: "Children of all ages can become ill with coronavirus disease 2019 (COVID-19). But most kids who are infected typically don't become as sick as adults and some might not show any symptoms at all." Moreover, a study from researchers in Berlin found that children "carried the same viral load, a signal of infectiousness." The Smithsonian Magazine article underscores that experts believe more data and studies are needed to understand how COVID-19 affects children.

Speaking of the Facebook removal, Courtney Parella, a spokesperson for the Trump campaign, said: "The President was stating a fact that children are less susceptible to the coronavirus. Another day, another display of Silicon Valley's flagrant bias against this President, where the rules are only enforced in one direction. Social media companies are not the arbiters of truth."

Cleaning house: Twitter suspends 7,000 accounts of QAnon conspiracy theory supporters

On July 21, 2020, Twitter suspended 7,000 accounts spreading QAnon conspiracy theories. In a tweet about the banning of these QAnon accounts, Twitter reiterated their commitment to taking "strong enforcement actions on behavior that has the potential to lead to offline harm." Twitter identified the QAnon accounts' violations of its community standards against "multi-account[s]," "coordinating abuse around individual victims," and "evad[ing] a previous suspension." In addition to the permanent suspensions, Twitter also felt it necessary to ban content and accounts "associated with Qanon" from the Trends and recommendations on Twitter, as well as to avoid "highlighting this activity in search and conversations." Further, Twitter will block "URLs associated with QAnon from being shared on Twitter." 

These actions by Twitter are a bold step in what has been a highly contentious area: the role of social media platforms in moderating hateful or harmful content. Some critics suggested that Twitter's QAnon decision lacked notice and transparency. Other critics contended that Twitter's actions were too little to stop the "omniconspiracy theory" that QAnon has become across multiple platforms.

So what exactly is QAnon? CNN describes the origins of QAnon, which began as a single conspiracy theory: its followers "claim dozens of politicians and A-list celebrities work in tandem with governments around the globe to engage in child sex abuse. Followers also believe there is a 'deep state' effort to annihilate President Donald Trump." Forbes similarly describes: "Followers of the far-right QAnon conspiracy believe a “deep state” of federal bureaucrats, Democratic politicians and Hollywood celebrities are plotting against President Trump and his supporters while also running an international sex-trafficking ring." In 2019, an internal FBI memo reportedly identified QAnon as a domestic terrorism threat.

Followers of QAnon are also active on Facebook, Reddit, and YouTube. The New York Times reported that Facebook was considering taking steps to limit the reach of QAnon content on its platform. Facebook is coordinating with Twitter and other platforms in considering its decision; an announcement is expected in the next month. Facebook has long been criticized for its response, or lack of response, to disinformation spread on its platform. Facebook is now the subject of a boycott, Stop Hate for Profit, calling for a halt to advertising until steps are taken to stop the spread of disinformation on the social media juggernaut. Facebook continues to allow political ads using these conspiracies on its site. Forbes reports that although Facebook has seemingly tried to take steps to remove pages containing conspiracy theories, a number of pages still remain. Since 2019, Facebook has allowed 144 ads promoting QAnon on its platform, according to Media Matters. Facebook has continuously provided a platform for extremist content; it even allowed white nationalist content until officially banning it in March 2019.

Twitter's crackdown on QAnon is a step in the right direction, but it signals how little companies like Twitter and Facebook have done to stop disinformation and pernicious conspiracy theories in the past. As conspiracy theories can undermine effective public health campaigns to stop the spread of the coronavirus, and foreign interference can undermine elections, social media companies appear to be playing a game of catch-up. They would be well served by devoting even greater resources to the problem, with more staff and a clearer articulation of their policies and enforcement procedures. In an era of holding platforms and individuals accountable for actions that spread hate, social media companies now appear to realize that they have greater responsibilities for what happens on their platforms.

--written by Bisola Oni

The Twitter Hack: What Preliminary Investigations Have Revealed

What happened: Hackers accessed a slew of Twitter accounts to sell their user names, and took control of high-profile accounts to tweet links to a Bitcoin scam.

In a recent blog post, Twitter admitted that its platform was hacked last Wednesday, July 15, 2020. Twitter alleged hackers engaged in a “social engineering scheme” to access its internal tools. Twitter defined “social engineering” as “the intentional manipulation of people into performing certain actions and giving out their personal information.”

Ultimately, hackers accessed 130 Twitter accounts. The hackers were able to reset the password for 45 accounts; they then logged into those accounts and tweeted out cryptocurrency "bitcoin" scams. The hacking scheme escalated just before 3:30 p.m. on July 15, 2020. According to a New York Times investigation, the accounts of certain cryptocurrency company elites began asking for Bitcoin donations to a website called “cryptoforhealth.” The Bitcoin wallet set up to receive the donations was none other than the wallet that a hacker known as “Kirk” had been using all day. “Kirk” then started tweeting out links from celebrities’ and tech giants’ accounts telling users to send money to a Bitcoin account, with a promise that the amount would be doubled in return.

According to an investigation by Krebs, the Bitcoin account processed 383 transactions; according to the New York Times, 518 transactions were processed worldwide. It wasn’t until around 6 p.m. that Twitter put a stop to the scam messages. Twitter’s blog post stated: “We’re embarrassed, we’re disappointed, and more than anything, we’re sorry.” Once the hacks were detected, Twitter “secured and revoked access to internal systems,” restricted the functionality of many Twitter accounts – preventing tweeting and password changes – and locked accounts where there had been a recent password change.

What was accessed?

Twitter assured its users that, except for the 130 hacked accounts, no personal information was compromised. However, it is likely the hackers saw those users’ personal information, such as phone numbers and email addresses. For the 45 accounts that were taken over by the hackers, more information was compromised – but Twitter did not state what that information could be. For eight accounts, the hackers downloaded the users’ information, such as a summary of the user’s activity and account details. It is unclear which eight accounts were affected at this time.

Investigators are trying to identify the hackers – foreign state interference is not suspected.

Investigators are trying to figure out if a Twitter employee was involved or whether, as Twitter claimed, the hacking was orchestrated by social engineering, where one individual posed as a trusted employee to gain credentials and account access. The Federal Bureau of Investigation said, "the accounts appear to have been compromised in order to perpetuate cryptocurrency fraud.”  U.S. senators have demanded Twitter submit a brief by July 23, 2020. New York Governor Andrew Cuomo announced the state will conduct a full investigation.  

According to an exclusive New York Times interview with four of the culprits, the organized hacking scheme was not politically motivated, despite targeting some political and corporate elites. The New York Times verified the hackers’ identities – “lol,” “ever so anxious,” and two others – by matching their social media and cryptocurrency accounts. The hackers also provided photos of their chat logs. Krebs identified another key player in the Twitter Hack, “PlugWalkJoe.” Investigators have confirmed some of the information relayed in the New York Times’ exclusive interview. “lol” is a 20-something living on the United States’ West Coast. “ever so anxious” is 19 and lives with his mother in the south of England. Both are well-known gamers on OGusers.com. “PlugWalkJoe,” whose name is Joseph O’Connor, is 21, British, and was in Spain when the Twitter Hack scheme started. Mr. O’Connor insists he played no part in Wednesday’s events. “Kirk,” meanwhile, was unknown before Wednesday’s Twitter Hack – and his real identity is still under investigation.

The scheme began with messages the previous Tuesday night between two hackers, “Kirk” and “lol.” “Kirk” reached out to “lol,” alleging he worked at Twitter, and demonstrated he could take control of valuable Twitter accounts. The hackers claim they were not part of a foreign interference plot – they are a group of young people, one still living with his mom, obsessed with owning early or unusual user names consisting of one letter or number, such as @y or @6. But “lol” told the New York Times he suspected “Kirk” did not work at Twitter because he was “too willing to damage the company.”

Regardless, “Kirk” could take control of almost any Twitter account, including those of former President Obama; former Vice President and Democratic presidential nominee Joseph R. Biden; Elon Musk; and other celebrities. The BBC reported that other elites’ accounts were hacked too, like Bill Gates, Kim Kardashian, Kanye West, Apple, and Uber. Krebs adds Jeff Bezos, former New York Mayor Michael Bloomberg, and Warren Buffett to the list.

Prestige is King – Four hackers were inspired by an obsession with “OG user names.”

According to the hackers, “Kirk” directed the group’s efforts. However, two hackers, “lol” and “ever so anxious,” told the New York Times they sought the prestige of owning an original user name. The two claim they only helped “Kirk” by facilitating the purchases and takeovers of OG, or “original gangster,” user names earlier Wednesday. In their interview, the four hackers insisted they parted ways with “Kirk” before he started taking over higher-profile accounts. In the online gaming world, certain user names associated with the launch of a new online platform – so-called OG user names – are highly desired. These prestigious user names are snagged by the earliest users of the new platform. Many latecomers to the platform want the credibility of the OG user names, and will often pay big bucks to get one.

Wednesday’s hacking scheme began with a plan to commandeer and sell OG user names. “Kirk” asked “lol” and “ever so anxious” to act as middlemen for the sale of some Twitter OG user names. “Kirk” promised the other two would get a cut of each transaction they secured. For example, the first “deal” “lol” brokered included a person offering $1500 in Bitcoin for the “@y” user name. The group posted an advertisement on OGusers.com and customers poured in. The group sold user names like @b, @dark, @l, @R9, @vague, @w, @6, and @50. One buyer, and possible culprit, “PlugWalkJoe,” bought the “@6” user name from “ever so anxious,” while “ever so anxious” commandeered the user name “@anxious.” Nearly all the transactions that occurred in relation to the Twitter Hack went into one Bitcoin wallet, predominately used by “Kirk” throughout the day.

Election Day 2020 Concerns

Because high-profile politicians’ accounts were compromised in Wednesday’s Twitter Hack, many have expressed concerns about potential disinformation campaigns closer to November 3rd. These concerns are exacerbated by the fact that Twitter did not detect the hacking scheme for hours after it started. While U.S. and state government officials have sought to protect voting systems against potential hacking, Wednesday’s chaos has shown that efforts to protect the security of the upcoming presidential election might need renewed attention. The investigations into the Twitter Hack are still ongoing, and many details remain unclear.

--written by Allison Hedrick

Why Voters Should Beware: Lessons from Russian Interference in 2016 Election, Political and Racial Polarization on Social Media

Overview of the Russian Interference Issue

The United States prides itself on having an open democracy, with free and fair elections decided by American voters. If Americans want a policy change, then the remedy most commonly called upon is political participation--and the vote. If Americans want change, then they should vote out the problematic politicians and choose public officials to carry out the right policies. However, what if the U.S. voting system is skewed by foreign interference? 

American officials are nearly unanimous in concluding, based on U.S. intelligence, that Russia interfered with the 2016 presidential election [see, e.g., here; here; and Senate Intelligence Report]. “[U]ndermining confidence in America’s democratic institutions” is what Russia seeks. In 2016, few in the U.S. were even thinking about this type of interference; the country's guard was down. Russia interfered with the election in various ways, including fake campaign advertisements, bots on Twitter and Facebook that pumped out emotionally and politically charged content, and the spread of disinformation or “fake news.” Social media hacking, as opposed to physical-polling-center hacking, is at the forefront of discussion because it can not only change who is in office, but also shift American voters’ political beliefs and understanding of political topics, or depress voter turnout.

And, if you think Russia is taking a break this election cycle, you'd be wrong. According to a March 10, 2020 New York Times article, David Porter of the FBI Foreign Influence Task Force says: "We see Russia is willing to conduct more brazen and disruptive influence operations because of how it perceives its conflict with the West."

What Interference Has to Do with Political Polarization

Facebook and Twitter have been criticized countless times by various organizations, politicians, and the media for facilitating political polarization. The U.S. political system of two dominant parties is especially susceptible to political polarization. Individuals belonging to either party become so invested in that party’s beliefs that they see the other party’s members not just as different but as wrong and detrimental to the future of the country. In the past twenty years, the share of Americans who consistently hold conservative or liberal views went from 10% to 20%, showing the increasing division, according to an article in Greater Good Magazine.

Political polarization is facilitated by platforms like Facebook and Twitter because of their content algorithms, which are designed to make the website experience more enjoyable. The Facebook News Feed “ranks stories based on a variety of factors including their history of clicking on links for particular websites,” as described by a Brookings article. Under the algorithm, if a liberal user frequently clicks on liberally skewed content, that is what they are going to see the most. Research shows this algorithm reduced cross-cutting political “content by 5 percent for conservatives and 8 percent for liberals.” Thus, the algorithm limits your view of other opinions.

So, you might ask, “Why is that bad? I want to see content more aligned with my beliefs.” Democracy is built on the exchange of varying political views and dissenting opinions. The US has long stood by its reputation for freedom of speech and encouraging a free flow of ideas. This algorithmic grouping of like-minded people can be useful when it comes to hobbies and interests; however, when it comes to consistently grouping individuals based on political beliefs, it can have a negative impact on democracy. This grouping causes American users to live in “filter bubbles” that only expose them to content that aligns with their viewpoints. Users tend to find this grouping enjoyable due to confirmation bias, the psychological tendency of individuals to consume content that aligns with their pre-existing beliefs. So, all the articles about Trump successfully leading the country will be ranked first on a conservative user’s Facebook News Feed and will also be the most enjoyable for that user. This filter bubble is dangerous to a democratic system because the lack of diverse perspectives when consuming news content encourages close-mindedness and increases distrust in anyone who disagrees.

During the 2016 presidential election, Russian hackers put out various types of fake articles, campaign advertisements, and social media posts that were politically charged on either the liberal or conservative side. Because the Facebook algorithm shows more conservative content to conservatives and more liberal content to liberals, hackers had no problem reaching their desired audience quickly and effectively. On Facebook they created thousands of robot computer programs that would enter various interest groups and engage with their target audience. For example, in 2016, a Russian soldier successfully entered a U.S. Facebook group pretending to be a 42-year-old housewife, as reported by Time. He responded to political issues discussed in that group, using emotional and political buzzwords when bringing up political issues and stories. On Twitter, thousands of fake accounts run by Russians and computer robots spread disinformation about Hillary Clinton by continuously mentioning her email scandal from when she was Secretary of State and a fabricated Democratic pedophile ring called “Pizzagate.” These robots spewed hashtags like “#MAGA” and “#CrookedHillary,” making up more than a quarter of the content within those hashtags.

Facebook and Twitter’s Response to the 2016 Russian Interference

According to a Wall Street Journal article on May 26, 2020 and a Washington Post article on June 28, 2020, Facebook had an internal review of how Facebook could reduce polarization on its platform following the 2016 election, but CEO Mark Zuckerberg and other executives decided against the recommended changes because it was seen as "paternalistic" and would potentially affect conservatives on Facebook more. 

After coming under increasing fire from critics for allowing misinformation and hate speech to go unchecked on Facebook, the company announced some changes to "fight polarization" on May 27, 2020. This initiative included a recalibration of each user’s Facebook News Feed to prioritize family and friends’ content over divisive news content. The reasoning was that data shows people are more likely to have meaningful discourse with people they know, which would foster healthy debate rather than ineffective, one-off conversations. Facebook also mentioned a policy directly targeting the spread of disinformation on the platform: an independent fact-checking program that will automatically check content in over 50 languages around the world for false information. Disinformation that could potentially contribute to “imminent violence, physical harm, and voter suppression” will be removed.

But those modest changes weren't enough to mollify Facebook's critics. Amidst the mass nationwide protests of Minneapolis police officer Derek Chauvin's brutal killing of George Floyd, nonprofit organizations including Color of Change organized an ad boycott against Facebook. Over 130 companies agreed to remove their ads from Facebook during July or longer. That led Zuckerberg to change his position on exempting politicians from fact-checking and from the company's general policy on misinformation. Zuckerberg said that politicians would now be subject to the same policy as every other Facebook user and would be flagged if they disseminated misinformation (or hate speech) that violates Facebook's general policy.

Twitter’s CEO Jack Dorsey not only implemented a fact-checking policy similar to Facebook's, but also admitted that the company needed to be more transparent in its policy making. The fact-checking policy “attached fact-checking notices” at the bottom of various tweets, alerting users that those tweets could contain false claims. Twitter also decided to forbid all political advertising on its platform. In response to Twitter's flagging of his content, President Trump issued an executive order seeking to increase regulation of social media platforms and stop them from deleting users’ content and censoring their speech.

With the 2020 U.S. election only four months away, Internet companies are still figuring out how to stop Russian interference and the spread of misinformation, hate speech, and political polarization intended to interfere with the election. Whether Internet companies succeed remains to be seen. But there have been more policy changes and decisions by Facebook, Twitter, Reddit, Snapchat, Twitch, and other platforms in the last month than in all of last year.

--written by Mariam Tabrez

Over 130 Companies Remove Ads from Facebook in #StopHateforProfit Boycott, forcing Mark Zuckerberg to change lax Facebook policy on misinformation and hate content

In the aftermath of the Cambridge Analytica scandal, in which the company exploited Facebook to target and manipulate swing voters in the 2016 U.S. election, Facebook did an internal review to examine the company's role in spreading misinformation and fake news that may have affected the election, as CEO Mark Zuckerberg announced. In 2018, Zuckerberg announced that Facebook was making changes to be better prepared to stop misinformation in the 2020 election. Critics, however, dismissed the changes as modest. As WSJ reporters Jeff Horwitz and Deepa Seetharaman detailed, Facebook executives largely rejected the internal study's recommendations to reduce polarization on Facebook. Doing so might be "paternalistic" and might open Facebook up to criticisms of being biased against conservatives.

Despite the concerns about fake news and misinformation affecting the 2020 election, Facebook took the position that fact checking for misinformation did not apply to the posts and ads by politicians in the same way as they applied to everyone else. Facebook's policy was even more permissive to political ads and politicians. As shown below, Facebook justified this hands-off position as advancing political speech: "Our approach is grounded in Facebook's fundamental belief in free expression, respect for the democratic process, and the belief that, especially in mature democracies with a free press, political speech is the most scrutinized speech there is. Just as critically, by limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words."

Facebook's Fact-Checking Exception for Politicians and Political Ads

By contrast, Twitter CEO Jack Dorsey decided to ban political ads in 2019 and to monitor the content of politicians just as Twitter does with all other users for misinformation and other violations of Twitter's policy. Yet Zuckerberg persisted in his "hands off" approach: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.” Zuckerberg even said Twitter was wrong to add warnings to two of President Trump's tweets as misleading (regarding mail-in ballots) and glorifying violence (Trump said, "When the looting starts, the shooting starts," regarding the protests of Minneapolis police officer Derek Chauvin's killing of George Floyd). Back in October 2019, Zuckerberg defended his approach in the face of withering questioning by Rep. Alexandria Ocasio-Cortez.


In May and June 2020, Zuckerberg persisted in his "hands off" approach. Some Facebook employees quit in protest, while others staged a walkout.  Yet Zuckerberg still persisted. 

On June 17, 2020, Color of Change, which is "the nation’s largest online racial justice organization," organized with the NAACP, Anti-Defamation League, Sleeping Giants, Free Press, and Common Sense Media a boycott of advertising on Facebook for the month of July. The boycott was labeled #StopHateforProfit. Within just 10 days, over 130 companies joined the ad boycott of Facebook, including many large companies such as Ben and Jerry's, Coca-Cola, Dockers, Eddie Bauer, Levi's, The North Face, REI, Unilever, and Verizon.

On June 26, 2020, Zuckerberg finally announced some changes to Facebook's policy.  The biggest changes:

(1) Moderating hateful content in ads. As Zuckerberg explained on his Facebook page, "We already restrict certain types of content in ads that we allow in regular posts, but we want to do more to prohibit the kind of divisive and inflammatory language that has been used to sow discord. So today we're prohibiting a wider category of hateful content in ads. Specifically, we're expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others. We're also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them."

(2) Adding labels to posts, including from candidates, that may violate Facebook's policy. As Zuckerberg explained, "Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

"We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society -- but we'll add a prompt to tell people that the content they're sharing may violate our policies.

"To clarify one point: there is no newsworthiness exemption to content that incites violence or suppresses voting. Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down. Similarly, there are no exceptions for politicians in any of the policies I'm announcing here today." 

Facebook's new labeling of candidates' content sounds very similar to the Twitter practice that Zuckerberg had criticized as wrong. And Facebook's new policy of moderating hateful ad content claiming that "people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status" are "a threat to the physical safety, health or survival of others" seems a positive step toward preventing Facebook from being used as a platform to sow racial discord, which is a goal of Russian operatives according to U.S. intelligence. 

Facebook's new policy on moderating political ads and posts by politicians and others

The organizers of the boycott, however, were not impressed with Facebook's changes. They issued a statement quoted by NPR: "None of this will be vetted or verified — or make a dent in the problem on the largest social media platform on the planet. We have been down this road before with Facebook. They have made apologies in the past. They have taken meager steps after each catastrophe where their platform played a part. But this has to end now."


New study by Alto Data Analytics casts doubt on effectiveness of fact checking to combat published fake news

As “fake news” continues to plague the digital socio-political space, a new kind of investigative reporter has risen to combat this disinformation: the fact-checker. Generally, fact-checkers are journalists working to verify digital content by performing additional research on the content’s claims. Whenever a fact-checker uncovers a falsity masquerading as fact (aka fake news), they rebut this deceptive representation through articles, blog posts, and other explanatory comments that illustrate how the statement misleads the public. [More from Reuters] As of 2019, the number of fact-checking outlets across the globe has grown to 188 in 60 countries, according to the Reporters' Lab.  

But recent research reveals that this upsurge in fact-checkers may not have a great impact on defeating digital disinformation. From December 2018 to the European Parliamentary elections in May 2019, big-data firm Alto Data Analytics collected socio-political debate data from a variety of digital media platforms, in one of the first studies assessing the success of fact-checking efforts. Alto’s study examined five European countries: France, Germany, Italy, Poland, and Spain. Focusing on verified fact-checkers in each of these countries, Alto’s cloud-based Analyzer software tracked how users interacted with these trustworthy entities in digital space. Basing the experiment exclusively on Twitter, the Analyzer platform recorded how users interacted with the fact-checkers’ tweets through retweets, replies, and mentions. From this information, the data scientists calculated the fact-checkers’ effectiveness in reaching the communities most affected by disinformation.

Despite its limitation to five select countries, the study yielded discouraging results. In total, the fact-checking outlets in these countries accounted for only between 0.1% and 0.3% of total Twitter activity during the period. Across the five countries in the study, fact-checkers penetrated least successfully in Germany, followed closely by Italy; conversely, fact-checkers had the greatest distributive effect in France. Fact-checkers’ digital presence tended to reach only a few online communities. The study found that “fact-checkers . . . [were] unable to significantly penetrate the communities which tend to be exposed most frequently to disinformation content.” In other words, fact-checking efforts reached few individuals, and the ones they did reach were largely other fact-checkers. Alto Data notes, however, that its analysis “doesn’t show that the fact-checkers are not effective in the broader socio-political conversation.” But “the reach of fact-checkers is limited, often to those digital communities which are not targets for or are propagating disinformation.”  [Alto Data study]

Alto proposed ideas for future research on this topic: expanding the study beyond one social media site; conducting research to find differences in effectiveness among various types of digital content (memes, videos, pictures, and articles); taking search engine comparisons into account; and providing causal explanations for penetration differences between countries.

Research studies in the United States have also cast doubt on the effectiveness of fact-checkers. A Tulane University study found that citizens were more likely to alter their views from reading ideologically consistent media outlets than from neutral fact-checking entities. Some studies even suggest that encounters with corrective fact-checking pieces have undesired psychological effects on content consumers, hardening individuals’ partisan positions and perceptions instead of dispelling them. 

These studies suggest that it's incredibly difficult to "unring the bell" of fake news, so to speak.  That is why the proactive efforts of social media companies and online sites to minimize the spread of blatantly fake news related to elections may be the only hope of minimizing its deleterious effects on voters.  

Should tech companies do more for election security? Hard lessons from Russian social media warfare in the 2016 U.S. elections

Bill Gates, founder of Microsoft, joined the growing number of high-profile individuals demanding that the U.S. government step up its regulation of big tech companies. In a June 2019 interview at the Economic Club of Washington, DC, Gates said, “Technology has become so central that governments have to think: What does that mean about elections?” Gates focused on the need to reform user privacy rights and data security.

This concern comes on the heels of details of a Russian-led social media campaign to “sow discord in the U.S. political system through what it termed ‘information warfare,’” outlined in Volume I, Section II of the Mueller Report. According to the Mueller Report, a Russian-based organization known as the Internet Research Agency (IRA) “carried out a social media campaign that favored presidential candidate Donald J. Trump and disparaged presidential candidate Hillary Clinton.” As early as 2014, IRA employees traveled to the United States on intelligence-gathering missions to obtain information and photographs for use in their social media posts. After returning to St. Petersburg, IRA agents began creating and operating social media accounts and group pages that falsely claimed to be controlled by American activists. These accounts addressed divisive political and social issues in America and were designed to attract American audiences. The IRA's operation also included the purchase of political advertisements on social media in the names of American persons and entities.

Once the IRA-controlled accounts established a widespread following, they began organizing and staging political rallies within the United States. According to the Mueller Report, IRA-controlled accounts were used to announce and promote the events. Once potential attendees RSVP’d to an event page, the IRA-controlled account would message those individuals to ask if they were interested in serving as an “event coordinator.” The IRA then further promoted the event by contacting U.S. media and directing them to speak with the coordinator. After the event, the IRA-controlled accounts posted videos and photographs of it. Because the IRA was able to recruit unwitting American assets to run the events, there was no need for any IRA employee to be present at the actual event.

Throughout the 2016 election season, several prominent political figures [including President Trump, Donald J. Trump Jr., Eric Trump, Kellyanne Conway, and Michael Flynn] and various American media outlets responded to, interacted with, or otherwise promoted dozens of tweets, posts, and other political content created by the IRA. By the end of the 2016 U.S. election, the IRA had the ability to reach millions of Americans through their social media accounts. The Mueller Report has confirmed the following information with individual social media companies:

  1. Twitter identified 3,814 IRA-controlled accounts that directly contacted an estimated 1.4 million people. In the ten weeks before the 2016 U.S. presidential election, these accounts posted approximately 175,993 tweets.
  2. Facebook identified 470 IRA-controlled accounts who posted more than 80,000 posts that reached as many as 126 million persons. IRA also paid for 3,500 advertisements.
  3. Instagram identified 170 IRA-controlled accounts that posted approximately 120,000 pieces of content.

Since the details of the IRA’s social media campaign were publicized, big tech companies have been subject to heightened levels of scrutiny regarding their effort to combat misinformation and other foreign interference in American elections. However, many members of Congress were pushing for wide-ranging social media reform even before the release of the Mueller Report.

In April 2018, Facebook founder and CEO Mark Zuckerberg testified over a two-day period during a joint session of the Senate Commerce and Judiciary Committees and the House Energy and Commerce Committee. These hearings were prompted by the Cambridge Analytica scandal. Cambridge Analytica—a political consulting firm with links to the Trump campaign—harvested the data of an estimated 87 million Facebook users to psychologically profile voters during the 2016 election. Zuckerberg explained that, when functioning properly, Facebook collects users’ information so that advertisements can be tailored to the specific group of people a third party wishes to target as part of its advertising strategy; in that scenario, the third parties never receive any Facebook users’ data. However, Cambridge Analytica exploited a loophole in Facebook’s Application Programming Interface (API) that allowed the firm to obtain users’ data after the users accessed a quiz called “thisisyourdigitallife.” The quiz was created by Aleksandr Kogan, a Russian American who worked at the University of Cambridge. Zuckerberg explained to members of Congress that what Cambridge Analytica did was improper, but also admitted that Facebook made a serious mistake in trusting Cambridge Analytica when the firm told Facebook it was not using the data it had collected through the quiz.

Another high-profile hearing occurred on September 5, 2018, when Twitter co-founder and CEO Jack Dorsey was called to testify before the Senate Intelligence Committee to discuss foreign influence operations on social media platforms. During this hearing, Dorsey discussed Twitter’s algorithm that prevents the circulation of tweets that violate the platform’s Terms of Service, including the kind of malicious behavior seen in the 2016 election. Dorsey also discussed Twitter’s retrospective review of IRA-controlled accounts and how the information gathered is being used to quickly identify malicious automated accounts, a tool on which the IRA relied heavily prior to the 2016 election. Lastly, Dorsey briefed the committee on Twitter’s suspicion that other countries—namely Iran—may be launching their own social media campaigns.

With the 2020 election quickly approaching, these social media executives are under pressure to prevent their platforms from being abused in the election process. Likewise, the calls for elected officials to increase regulation of social media platforms are growing stronger by the day, especially since Gates joined the conversation.

[Sources: Mueller Report, PBS, Washington Post, CNN, The Guardian, Vox I, Vox II]

Iran starts "smart filtering" of Instagram, may lead to unblocking Facebook, Twitter, YouTube in 2015

According to The Guardian, Iran has started a trial of a "smart filtering" of Instagram photographs, allowing Iranians access to the site but selectively blocking certain posts, such as those by @RichKidsofTehran, which shows wealthy, young Iranians "flaunting their wealth."  If the smart filtering proves successful, Iran may deploy the system on other popular social media like Facebook, Twitter, and YouTube, which currently are blocked in Iran. 

“Presently, the smart-filtering plan is implemented only on one social network in its pilot study phase and this process will continue gradually until the plan is implemented on all networks,” Mahmoud Vaezi, the Iranian Communications Minister, said.

The goal is to have the system in place by June 2015.  Some Iranians expressed fear that the Iranian government would start cracking down on virtual private networks (VPNs), which already allow people in Iran to bypass the blocking of popular websites and social media.

Facebook, Google, Twitter won't comply with Russia's orders to remove info on opposition rally

The Wall Street Journal reports that Facebook, YouTube, and Twitter appear to plan on defying Russia's communications regulator, Roskomnadzor, which has ordered them to block information related to a January 15 rally for opposition leader Alexei Navalny posted on the U.S. social media sites accessible in Russia. Navalny is under house arrest on charges of fraud that his supporters claim are trumped up to silence the opposition. 

According to WSJ, Roskomnadzor issued its orders under a new law in Russia that authorizes prosecutors to issue such orders without court authorization or involvement.  
