The Free Internet Project


Meeting Between Facebook, Zuckerberg and Stop Hate for Profit Boycott Group Turns into a Big Fail

Facebook has come under scrutiny for its handling of hate speech and disinformation posted on the platform. Through the Stop Hate for Profit movement, corporations have begun to take steps to hold Facebook accountable for the disinformation spread on the platform. So far, more than 400 advertisers, from Coca-Cola to Ford and Lego, have pledged to stop advertising on the social media platform, according to NPR. Facebook has faced intense backlash, particularly since the 2016 election, for allowing disinformation and propaganda to be posted freely. This disinformation and hate, or “fake news” as many call it, aims to misinform voters and spread hateful propaganda, potentially dampening voter participation.

A broad coalition of groups including Color of Change, the Anti-Defamation League, and the NAACP started the campaign Stop Hate for Profit. (For more on the origin, read Politico.) The goal of the campaign is to push Facebook to make much-needed changes to its policy guidelines as well as changes among the company's executives. The boycott targets the advertising dollars on which the social media juggernaut relies. The campaign has begun to pick up steam, with new companies announcing an end to Facebook ads every day. With this momentum, the group behind the boycott has released a list of 10 first steps Facebook can take.

Stop Hate for Profit is asking that Facebook take accountability, show decency, and provide support to the groups most affected by the hate spread on the platform. The civil rights leaders behind the movement are focused on making changes at the executive level as well as holding Facebook more accountable for its lackluster terms of service. The top executives currently at Facebook may have a conflict of interest: critics contend that Facebook has a duty to make sure misinformation and hate are not spread, but that it does not exercise that duty to the fullest because of its relationships with politicians. Rashad Robinson, president of Color of Change, contends that there needs to be a separation between the people in charge of the content allowed on Facebook and those who are aligned with political figures. The group is asking Facebook to hire an executive with a civil rights background who can evaluate discriminatory policies and products. Additionally, the group is asking Facebook to expand its definition of hate speech. Facebook's current terms of service are criticized as ineffective and problematic.

Facebook's policies and algorithms are among the things the group asks to be changed. Current Facebook policies allow public and private hate groups to exist and even recommend them to many users. The campaign asks that Facebook remove far-right groups that spread conspiracies, such as QAnon, from the platform. The group also asks that Facebook label inauthentic content that spreads hate and disinformation. By contrast, Twitter has taken small steps to label hateful content itself; while many criticize Twitter's actions as not going far enough, it has taken steps Facebook has yet to take. Throughout this process, the campaign wants Facebook to be transparent with the public about its progress--including the number of ads rejected for hate or disinformation and the results of a third-party audit of hate spread on the site.

The group also draws a connection between the hate on the Facebook platform and race issues within the company. Stop Hate for Profit cites a staggering statistic that 42% of Facebook users experience harassment on the platform. That figure, along with the EEOC complaints filed by a former Black employee and two job candidates, points to a culture at Facebook that goes beyond allowing far-right propaganda and misinformation on the site and reflects a lack of support for users and employees of color. All of this is used to back up why it is essential that Facebook go beyond making simple statements and actually take steps to create change.

Facebook CEO and cofounder Mark Zuckerberg agreed to meet with the civil rights groups behind the boycott amid the growing number of companies getting behind Stop Hate for Profit. Many voiced concerns that Facebook and Zuckerberg were more interested in messaging than in legitimately fixing the underlying problems. After meeting with Zuckerberg on July 7, Stop Hate for Profit released a statement describing what it felt was a disappointing and uneventful meeting. The group asserted that Facebook did what it had previously feared: it offered only surface-level rhetoric with no real interest in committing to any real change. Of the ten recommendations, Zuckerberg was open to addressing only one, hiring a person with a civil rights background, and even then he declined to commit to making it a C-suite executive position. Rashad Robinson tweeted a direct statement saying that Facebook was not ready to make any changes despite knowing the demands of the group. That view appears to be consistent with a July 2, 2020 report of a remark by Zuckerberg to employees at a virtual town hall: "We're not gonna change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue."

For now, it remains to be seen if the increased pressure from companies pulling advertisements will eventually cause Facebook and Zuckerberg to institute changes that progressive groups have been pushing for years. So far, it appears not.   

--written by Bisola Oni

Why Voters Should Beware: Lessons from Russian Interference in 2016 Election, Political and Racial Polarization on Social Media

Overview of the Russian Interference Issue

The United States prides itself on having an open democracy, with free and fair elections decided by American voters. If Americans want policy change, the remedy most commonly called upon is political participation--the vote: vote out problematic politicians and elect public officials who will carry out the right policies. However, what if the U.S. voting system is skewed by foreign interference?

American officials are nearly unanimous in concluding, based on U.S. intelligence, that Russia interfered with the 2016 presidential election [see, e.g., here; here; and Senate Intelligence Report]. “[U]ndermining confidence in America’s democratic institutions” is what Russia seeks. In 2016, few in the U.S. were even thinking about this type of interference; the country's guard was down. Russia interfered with the election in various ways, including fake campaign advertisements, bots on Twitter and Facebook that pumped out emotionally and politically charged content, and the spread of disinformation or “fake news.” Social media hacking, as opposed to physical polling-center hacking, is at the forefront of discussion because it can not only change who is in office but also shift American voters’ political beliefs and understanding of political topics, or depress voter turnout.

And, if you think Russia is taking a break this election cycle, you'd be wrong. According to a March 10, 2020 New York Times article, David Porter of the FBI Foreign Influence Task Force says: "We see Russia is willing to conduct more brazen and disruptive influence operations because of how it perceives its conflict with the West."

What Interference Has to Do with Political Polarization

Facebook and Twitter have been criticized countless times by various organizations, politicians, and the media for facilitating political polarization. The U.S. political system, with its two dominant parties, is especially susceptible to political polarization. Individuals belonging to either party become so invested in their party’s beliefs that they see the other party’s members not just as different but as wrong and detrimental to the future of the country. In the past twenty years, the share of people who consistently hold conservative or liberal views went from 10% to 20%, showing the increasing division, according to an article in Greater Good Magazine.

Political polarization is facilitated by platforms like Facebook and Twitter because of their content algorithms, which are designed to make the website experience more enjoyable. The Facebook News Feed “ranks stories based on a variety of factors including their history of clicking on links for particular websites,” as described by a Brookings article. Under the algorithm, if a liberal user frequently clicks on liberally skewed content, that is what they will see the most. Research shows the algorithm reduced cross-cutting political “content by 5 percent for conservatives and 8 percent for liberals.” Thus, the algorithm limits your view of other opinions.
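
To make the mechanism concrete, here is a minimal sketch of an engagement-based feed ranker, assuming a toy scoring rule in which prior clicks on a source multiply that source's score. It is purely illustrative and not Facebook's actual News Feed code; the function names, weights, and data shapes are invented for the example. The point is that even a simple click-history boost is enough to push familiar sources to the top and let cross-cutting content sink out of view.

    from collections import Counter

    # A toy engagement-based ranker (illustrative only, not Facebook's News Feed).
    # Stories from sources the user has clicked before get a multiplicative boost,
    # which is enough to reproduce the "filter bubble" dynamic described above.

    def rank_feed(stories, click_history, boost=2.0):
        """Order stories so that previously clicked sources rank higher.

        stories: list of dicts like {"id": ..., "source": ..., "base_score": ...}
        click_history: list of source names the user clicked in the past
        boost: assumed multiplier applied once per prior click on the same source
        """
        clicks = Counter(click_history)

        def score(story):
            return story["base_score"] * (boost ** clicks[story["source"]])

        return sorted(stories, key=score, reverse=True)

    if __name__ == "__main__":
        stories = [
            {"id": 1, "source": "liberal_outlet", "base_score": 1.0},
            {"id": 2, "source": "conservative_outlet", "base_score": 1.0},
            {"id": 3, "source": "nonpartisan_outlet", "base_score": 1.1},
        ]
        # A user who repeatedly clicked one outlet sees it ranked first,
        # even though another story has a higher base relevance score.
        for story in rank_feed(stories, ["conservative_outlet", "conservative_outlet"]):
            print(story["id"], story["source"])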

So, you might ask, “Why is that bad? I want to see content more aligned with my beliefs.” Democracy is built on the exchange of varying political views and dissenting opinions, and the US has long stood by its reputation for freedom of speech and a free flow of ideas. Algorithmic grouping of like-minded people can be useful when it comes to hobbies and interests; when it comes to consistently grouping individuals by political beliefs, however, it can harm democracy. This grouping causes American users to live in “filter bubbles” that only expose them to content that aligns with their viewpoints. Users tend to find this grouping enjoyable because of confirmation bias, the psychological tendency to consume content that aligns with one's pre-existing beliefs. So, all the articles about Trump successfully leading the country will rank first on a conservative user’s Facebook News Feed and will also be the most enjoyable for that user. This filter bubble is dangerous to a democratic system because the lack of diverse perspectives in news consumption encourages close-mindedness and increases distrust of anyone who disagrees.

During the 2016 presidential election, Russian hackers put out various fake articles, campaign advertisements, and social media posts that were politically charged on either the liberal or conservative side. Because the Facebook algorithm shows more conservative content to conservatives and more liberal content to liberals, the hackers had no trouble reaching their desired audiences quickly and effectively. On Facebook they created thousands of automated accounts that would enter various interest groups and engage with their target audience. For example, in 2016, a Russian soldier successfully entered a U.S. Facebook group pretending to be a 42-year-old housewife, as reported by Time; he responded to political issues discussed in the group and used emotional and political buzzwords when bringing up political issues and stories. On Twitter, thousands of fake accounts run by Russians and bots were used to spread disinformation about Hillary Clinton by continuously mentioning her email scandal from when she was Secretary of State and a fabricated Democratic pedophile ring dubbed “Pizzagate.” These bots spewed hashtags like “#MAGA” and “#CrookedHillary,” producing more than a quarter of the content posted under those hashtags.

Facebook and Twitter’s Response to the 2016 Russian Interference

According to a Wall Street Journal article on May 26, 2020 and a Washington Post article on June 28, 2020, Facebook conducted an internal review of how it could reduce polarization on its platform following the 2016 election, but CEO Mark Zuckerberg and other executives decided against the recommended changes because they were seen as "paternalistic" and would potentially affect conservatives on Facebook more.

After coming under increasing fire from critics for allowing misinformation and hate speech to go unchecked, Facebook announced some changes to "fight polarization" on May 27, 2020. The initiative included a recalibration of each user’s Facebook News Feed to prioritize content from family and friends over divisive news content. Facebook's reasoning was that data shows people are more likely to have meaningful discourse with people they know, which fosters healthy debate rather than ineffective, one-off conversations. Facebook also announced a policy directly targeting the spread of disinformation on the platform: an independent fact-checking program that will automatically check content in over 50 languages around the world for false information. Disinformation that could potentially contribute to “imminent violence, physical harm, and voter suppression” will be removed.

But those modest changes weren't enough to mollify Facebook's critics. Amid mass nationwide protests of Minneapolis police officer Derek Chauvin's brutal killing of George Floyd, nonprofit organizations including Color of Change organized an ad boycott against Facebook. Over 130 companies agreed to remove their ads from Facebook during July or longer. That led Zuckerberg to change his position on exempting politicians from fact checking and from the company's general policy on misinformation. Zuckerberg said that politicians would now be subject to the same policy as every other Facebook user and would be flagged if they disseminated misinformation (or hate speech) that violates Facebook's general policy.

Twitter’s CEO Jack Dorsey not only implemented a fact-checking policy similar to Facebook's but also admitted that the company needed to be more transparent in its policy making. The fact-checking policy “attached fact-checking notices” at the bottom of various tweets, alerting users that those tweets could contain false claims. Twitter also decided to forbid all political advertising on its platform. In response to Twitter's flagging of his content, President Trump issued an executive order seeking to increase regulation of social media platforms and stop them from deleting users’ content and censoring their speech.

With the 2020 U.S. election only four months away, Internet companies are still figuring out how to stop Russian interference and the spread of misinformation, hate speech, and political polarization intended to interfere with the election. Whether they succeed remains to be seen. But there have been more policy changes and decisions by Facebook, Twitter, Reddit, Snapchat, Twitch, and other platforms in the last month than in all of last year.

-by Mariam Tabrez

Over 130 Companies Remove Ads from Facebook in #StopHateforProfit Boycott, forcing Mark Zuckerberg to change lax Facebook policy on misinformation and hate content

In the aftermath of the Cambridge Analytica scandal, in which the company exploited Facebook to target and manipulate swing voters in the 2016 U.S. election, Facebook did an internal review to examine its role in spreading misinformation and fake news that may have affected the election, as CEO Mark Zuckerberg announced. In 2018, Zuckerberg announced that Facebook was making changes to be better prepared to stop misinformation in the 2020 election. Critics dismissed the changes as modest, however. As WSJ reporters Jeff Horwitz and Deepa Seetharaman detailed, Facebook executives largely rejected the internal study's recommendations to reduce polarization on Facebook, reasoning that doing so might be "paternalistic" and might open Facebook up to criticisms of being biased against conservatives.

Despite the concerns about fake news and misinformation affecting the 2020 election, Facebook took the position that its fact checking for misinformation did not apply to posts and ads by politicians in the same way it applied to everyone else. Facebook's policy was even more permissive for political ads and politicians. As shown below, Facebook justified this hands-off position as advancing political speech: "Our approach is grounded in Facebook's fundamental belief in free expression, respect for the democratic process, and the belief that, especially in mature democracies with a free press, political speech is the most scrutinized speech there is. Just as critically, by limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words."

Facebook's Fact-Checking Exception for Politicians and Political Ads

By contrast, Twitter CEO Jack Dorsey decided to ban political ads in 2019 and to monitor the content of politicians just as Twitter does with all other users for misinformation and other violations of Twitter's policy. Yet Zuckerberg persisted in his "hands off" approach: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.” Zuckerberg even said Twitter was wrong to add warnings to two of President Trump's tweets as misleading (regarding mail-in ballots) and glorifying violence (Trump said, "When the looting starts, the shooting starts" regarding the protests of Minneapolis police officer Derek Chauvin's killing of George Floyd). Back in October 2019, Zuckerberg defended his approach in the face of withering questioning by Rep. Alexandria Ocasio-Cortez.

 

In May and June 2020, Zuckerberg persisted in his "hands off" approach. Some Facebook employees quit in protest, while others staged a walkout.  Yet Zuckerberg still persisted. 

On June 17, 2020, Color of Change, which is "the nation’s largest online racial justice organization," organized a boycott of advertising on Facebook for the month of July, together with the NAACP, the Anti-Defamation League, Sleeping Giants, Free Press, and Common Sense Media. The boycott was labeled #StopHateforProfit. Within just 10 days, over 130 companies joined the ad boycott of Facebook, including many large companies such as Ben & Jerry's, Coca-Cola, Dockers, Eddie Bauer, Levi's, The North Face, REI, Unilever, and Verizon.

On June 26, 2020, Zuckerberg finally announced some changes to Facebook's policy.  The biggest changes:

(1) Moderating hateful content in ads. As Zuckerberg explained on his Facebook page, "We already restrict certain types of content in ads that we allow in regular posts, but we want to do more to prohibit the kind of divisive and inflammatory language that has been used to sow discord. So today we're prohibiting a wider category of hateful content in ads. Specifically, we're expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others. We're also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them."

(2) Adding labels to posts, including from candidates, that may violate Facebook's policy. As Zuckerberg explained, "Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

"We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society -- but we'll add a prompt to tell people that the content they're sharing may violate our policies.

"To clarify one point: there is no newsworthiness exemption to content that incites violence or suppresses voting. Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down. Similarly, there are no exceptions for politicians in any of the policies I'm announcing here today." 

Facebook's new labeling of candidates' content sounds very similar to the Twitter practice Zuckerberg had criticized as wrong. And Facebook's new policy on moderating hateful content in ads that "are a threat to the physical safety, health or survival of others," including "people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status," seems a positive step to prevent Facebook from being a platform to sow racial discord, which is a goal of Russian operatives according to U.S. intelligence.

Facebook's new policy on moderation of political ads and posts by politicians and others

The organizers of the boycott, however, were not impressed with Facebook's changes. They issued a statement quoted by NPR: "None of this will be vetted or verified — or make a dent in the problem on the largest social media platform on the planet. We have been down this road before with Facebook. They have made apologies in the past. They have taken meager steps after each catastrophe where their platform played a part. But this has to end now."

 

Should tech companies do more for election security?: hard lessons from Russian social media warfare in 2016 U.S. elections

Bill Gates, cofounder of Microsoft, joined the growing number of high-profile individuals demanding that the U.S. government step up its regulation of big tech companies. In a June 2019 interview at the Economic Club of Washington, DC, Gates said, “Technology has become so central that governments have to think: What does that mean about elections?” Gates focused on the need to reform user privacy rights and data security.

This concern follows the details, outlined in Volume I, Section II of the Mueller Report, of a Russian-led social media campaign to “sow discord in the U.S. political system through what it termed ‘information warfare.’” According to the report, a Russian organization known as the Internet Research Agency (IRA) “carried out a social media campaign that favored presidential candidate Donald J. Trump and disparaged presidential candidate Hillary Clinton.” As early as 2014, IRA employees traveled to the United States on intelligence-gathering missions to obtain information and photographs for use in their social media posts. After returning to St. Petersburg, IRA agents began creating and operating social media accounts and group pages that falsely claimed to be controlled by American activists. These accounts addressed divisive political and social issues in America and were designed to attract American audiences. The IRA's operation also included purchasing political advertisements on social media in the names of American persons and entities.

Once the IRA-controlled accounts established a widespread following, they began organizing and staging political rallies within the United States. According to the Mueller Report, IRA-controlled accounts were used to announce and promote the events. Once potential attendees RSVP’d to an event page, the IRA-controlled account would message these individuals to ask if they were interested in serving as an “event coordinator.” The IRA then further promoted the event by contacting US media and directing them to speak with the coordinator. After the event, the IRA-controlled accounts posted videos and photographs of the event. Because the IRA was able to recruit unwitting American assets to run the events, there was no need for any IRA employee to be present at the actual event.

Throughout the 2016 election season, several prominent political figures [including President Trump, Donald J. Trump Jr., Eric Trump, Kellyanne Conway, and Michael Flynn] and various American media outlets responded to, interacted with, or otherwise promoted dozens of tweets, posts, and other political content created by the IRA. By the end of the 2016 U.S. election, the IRA had the ability to reach millions of Americans through their social media accounts. The Mueller Report has confirmed the following information with individual social media companies:

  1. Twitter identified 3,814 IRA-controlled accounts that directly contacted an estimated 1.4 million people. In the ten weeks before the 2016 U.S. presidential election, these accounts posted approximately 175,993 tweets.
  2. Facebook identified 470 IRA-controlled accounts that posted more than 80,000 posts reaching as many as 126 million persons. The IRA also paid for 3,500 advertisements.
  3. Instagram identified 170 IRA-controlled accounts that posted approximately 120,000 pieces of content.

Since the details of the IRA’s social media campaign were publicized, big tech companies have been subject to heightened levels of scrutiny regarding their effort to combat misinformation and other foreign interference in American elections. However, many members of Congress were pushing for wide-ranging social media reform even before the release of the Mueller Report.

In April 2018, Facebook Founder and CEO Mark Zuckerberg testified over a two-day period during a joint session of the Senate Commerce and Judiciary Committees and the House Energy and Commerce Committee. These hearings were prompted by the Cambridge Analytica scandal. Cambridge Analytica—a political consulting firm with links to the Trump campaign—harvested the data of an estimated 87 million Facebook users to psychologically profile voters during the 2016 election. Zuckerberg explained that, when functioning properly, Facebook collects users’ information so that advertisements can be tailored to the specific group of people a third party wishes to target as part of its advertising strategy; in this scenario, the third parties never receive any Facebook users’ data. However, Cambridge Analytica exploited a loophole in Facebook’s Application Programming Interface (API) that allowed the firm to obtain users’ data after the users accessed a quiz called “thisismydigitallife.” The quiz was created by Aleksandr Kogan, a Russian American who worked at the University of Cambridge. Zuckerberg explained to members of Congress that what Cambridge Analytica did was improper, but also admitted that Facebook made a serious mistake in trusting Cambridge Analytica when the firm told Facebook it was not using the data it had collected through the quiz.

Another high-profile hearing occurred on September 5, 2018 when Twitter Co-Founder and CEO Jack Dorsey was called to testify before the Senate Intelligence Committee to discuss foreign influence operations on social media platforms. During this hearing, Dorsey discussed Twitter’s algorithm that prevents the circulation of Tweets that violate the platform’s Terms of Service, including the malicious behavior we saw in the 2016 election. Dorsey also discussed Twitter’s retrospective review of IRA-controlled accounts and how the information gathered is being utilized to quickly identify malicious automated accounts, a tool that the IRA relied heavily on prior to the 2016 election. Lastly, Dorsey briefed the committee on Twitter’s suspicion that other countries—namely Iran—may be launching their own social media campaigns.

With the 2020 election quickly approaching, these social media executives are under pressure to prevent their platforms from being abused in the election process. Likewise, calls for elected officials to increase regulation of social media platforms are growing stronger by the day, especially since Gates joined the conversation.

[Sources: Mueller Report, PBS, Washington Post, CNN, The Guardian, Vox I, Vox II]

Mark Zuckerberg appeals to India before key decision on Internet.org platform, amid protests of net neutrality violation

Ahead of a key decision by India's telecommunications regulatory body, Mark Zuckerberg wrote a blog post in the Times of India to defend his nonprofit Internet.org, which provides free (but limited) Internet access to under-served areas.  The service is called "Free Basics," which enables users to access the Internet but only for a limited number of apps, such as weather, Wikipedia, and, yes, Facebook. Other app developers can apply to Internet.org to be included in Free Basics.   

Zuckerberg visits Colombia to launch free Internet.org app, 1st country in Latin America

Mark Zuckerberg visited Colombia and met with President Juan Manuel Santos to launch a free Internet.org app for smartphones that enables subscribers of local phone service Tigo to get free Internet access to a limited number of services, including Facebook and several government sites such as "Instituto Colombiano para la Evaluación de la Educación, an education assessment service and Agronet, a service that provides information on agriculture and rural development."  The list of free services includes:

1doc3
24 Symbols
AccuWeather
Agronet
BabyCenter & MAMA
Facebook
Girl Effect
Instituto Colombiano para la Evaluación de la Educación
Messenger
Mitula
Para la Vida
Su Dinero
Tambero.com
UNICEF
Wikipedia
YoAprendo

 

Iran starts "smart filtering" of Instagram, may lead to unblocking Facebook, Twitter, YouTube in 2015

According to The Guardian, Iran has started a trial of a "smart filtering" of Instagram photographs, allowing Iranians access to the site but selectively blocking certain posts, such as those by @RichKidsofTehran, which shows wealthy, young Iranians "flaunting their wealth."  If the smart filtering proves successful, Iran may deploy the system on other popular social media like Facebook, Twitter, and YouTube, which currently are blocked in Iran. 

“Presently, the smart-filtering plan is implemented only on one social network in its pilot study phase and this process will continue gradually until the plan is implemented on all networks,” Mahmoud Vaezi, the Iranian Communications Minister, said.

The goal is to have the system in place by June 2015.  Some Iranians expressed fear that the Iranian government would start cracking down on virtual private networks (VPNs), which already allow people in Iran to bypass the blocking of popular websites and social media.

Facebook, Google, Twitter won't comply with Russia's orders to remove info on opposition rally

The Wall Street Journal reports that Facebook, YouTube, and Twitter appear to plan to defy Russia's communications regulator, Roskomnadzor, which has ordered them to block information posted on their sites, accessible in Russia, about a January 15 rally for opposition leader Alexei Navalny. Navalny is under house arrest on fraud charges that his supporters claim are trumped up to silence the opposition.

According to WSJ, Roskomnadzor issued its orders under a new law in Russia that authorizes prosecutors to issue such orders without court authorization or involvement.  

Facebook Issues 3rd Government Requests Report (Censorship and User Information)

Facebook came out last week with its third Government Requests Report, which compiles data on requests by governments around the world from January to July 2014 to take down information or obtain user information from Facebook. India led the requests for censoring material on Facebook, with 4,960 pieces of content removed at the Indian government's request. Turkey was second (1,893 pieces of content taken down), Pakistan third (1,773 pieces of content taken down), and Germany fourth (34 pieces of content taken down).

Mark Zuckerberg, Facebook-led effort to provide free Internet access in Zambia

Mark Zuckerberg helped start a cooperative effort between Facebook and telecom companies to provide free Internet access in countries that lack it. This initiative--called Internet.org--just launched an app for people in Zambia that provides limited free access to Facebook, Facebook Messenger, Google, AccuWeather, Unicef, job search sites in Zambia, and women's health and rights organizations. (More here.)
