New: Facebook and Instagram are in the early stages of creating teams to work on racial bias, per sources + company. Among other things, the teams will study the impacts of its services on different racial groups. https://t.co/6K1Dmyiqqq
Facebook announced it will create teams to study whether racial bias in Facebook's and Instagram's algorithms negatively affects the experience of its minority users on the platforms. The Equity and Inclusion Team at Instagram and the Inclusivity Product Team at Facebook will tackle a large issue that Facebook has largely ignored in the past. Facebook is under intense scrutiny. Since July 2020, Facebook has faced a massive advertising boycott, called Stop Hate for Profit, by over five hundred companies such as Coca-Cola, Disney, and Unilever. Facebook has been criticized for a lack of initiative in handling hate speech and attempts to sow racial discord on its platforms, including attempts to suppress Black voters. An independent audit by civil rights experts found the prevalence of hate speech targeting Blacks, Jews, and Muslims on Facebook "especially acute." “The racial justice movement is a moment of real significance for our company,” Vishal Shah, Instagram’s product director, told the Wall Street Journal. “Any bias in our systems and policies runs counter to providing a platform for everyone to express themselves.”
The new research teams will cover what has been a blind spot for Facebook. In 2019, Facebook employees found that an automated moderation algorithm on Instagram was 50 percent more likely to suspend the accounts of Black users than those of white users, according to the Wall Street Journal. This finding was supported by user complaints to the company. After employees reported these findings, they were sworn to secrecy, and Facebook conducted no further research on the algorithm. Ultimately, the algorithm was changed, but it was not tested any further for racial bias. Facebook officially stated that the research was stopped because an improper methodology was being applied at the time. As reported by NBC News, Facebook employees leaked that the automated moderation algorithm detects and deletes hate speech against white users more effectively than it moderates hate speech against Black users.
Facebook's announcement of these teams to study racial bias on its social media platforms is only a first step. The Instagram Equity and Inclusion Team does not yet have an announced leader. The Inclusivity Product Team will supposedly work closely with a group of Black users and cultural experts to make effective changes. However, Facebook employees who previously worked on this issue have stated anonymously that they were ignored and discouraged from continuing their work. Facebook's company culture and its previous inaction on racial issues have raised skepticism about these recent initiatives. Time will tell if Facebook is serious about the problem.
ProPublica and First Draft conducted a study of "Facebook posts using voting-related keywords — including the terms 'vote by mail,' 'mail-in ballots,' 'voter fraud' and 'stolen elections' — since early April [2020]." According to ProPublica, Donald Trump and conservatives have misrepresented mail-in voting as leading to voter fraud. That assertion has not been substantiated. For example, the Washington Post found that states with all-mail elections — Colorado, Oregon and Washington — had only 372 potential irregularities out of 14.6 million votes, or just 0.0025%. According to a recent study by Prof. Nicolas Berlibski and others, unsubstantiated claims of voter fraud can negatively affect public confidence in elections. The false claims can significantly undermine the faith of voters, Republican or Democratic, in the electoral process, even when the misinformation is disproved by fact-checks.
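The Washington Post's percentage is easy to verify. A quick sanity check of the arithmetic (the two input figures come from the article above; nothing else is assumed):

```python
# 372 potential irregularities out of 14.6 million all-mail votes
# (figures as reported by the Washington Post).
irregularities = 372
total_votes = 14_600_000

rate_percent = irregularities / total_votes * 100
print(f"{rate_percent:.4f}%")  # → 0.0025%
```

Even rounding generously, potential irregularities amount to roughly 25 out of every million ballots cast.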
In the study, ProPublica and First Draft found numerous posts on Facebook that contained misinformation about mail-in ballots. Many of "the top 50 posts, ranked by total interactions, that mentioned voting by mail since April 1, 2020" [to July 2020] contained false or substantially misleading claims about voting, particularly about mail-in ballots. ProPublica identified the popular Facebook posts by using engagement data from CrowdTangle.
Facebook’s community standards state that no one shall post content that contains “[m]isrepresentation of the . . . methods for voting or voter registration or census participation.” Facebook CEO Mark Zuckerberg recently said on his Facebook page in June 2020 that he stands against “anything that incites violence or suppresses voting,” and that the company is “running the largest voting information campaign in American history . . . to connect people with authoritative information about the elections . . . crack down on voter suppression, and fight hate speech.” Facebook reportedly removed more than 100,000 posts from Facebook and Instagram that violated the company's community standard against voter suppression from March to May 2020. As ProPublica reported, California Secretary of State Alex Padilla stated that "Facebook has removed more than 90% of false posts referred to it by VoteSure, a 2018 initiative by the state of California to educate voters and flag misinformation."
However, according to the joint project by ProPublica and First Draft, Facebook is still falling well short in its efforts to stop election misinformation. Facebook has failed to take down posts from individual accounts and group pages that contain false claims about mail-in ballots and voter fraud, including some portraying "people of color as the face of voter fraud."
Facebook is reportedly considering banning political ads in the days before the election, but that hardly touches the core of the problem: rampant public misinformation about mail-in ballots. False claims are far more widespread in posts than in ads, according to the ProPublica and First Draft study.
Facebook recently reported that it removed various networks, accounts, and pages from its Facebook and Instagram platforms for violations of its foreign interference policy.
Facebook defines “foreign interference” as “coordinated inauthentic behavior on behalf of a foreign or governmental entity.” Thus, removals resulting from a violation of the foreign interference policy are based on user behavior – not content. The removed networks originated in Canada and Ecuador, Brazil, Ukraine, and the United States.
According to Nathaniel Gleicher, Facebook’s Head of Security Policy, these networks engaged in “coordinated inauthentic behavior” (CIB): individuals within each network coordinated with one another through fake accounts to mislead people about who they were and what they were doing. The networks were removed because they focused on domestic audiences and had associations with commercial entities, political campaigns, and political offices.
As outlined in Facebook's report, the Canada and Ecuador network focused its activities on Argentina, Ecuador, El Salvador, Chile, Uruguay, and Venezuela. Individual accounts and pages in this network centered on elections, taking part in local debates on both sides. Some individuals created fake accounts, posing as locals of the countries they targeted; others posed as “independent” news platforms in those countries. This network alone had 41 accounts and 77 pages on Facebook, plus another 56 Instagram accounts; 274,000 accounts followed one or more of the 77 Facebook pages and 78,000 followed the network on Instagram; and it spent $1.38 billion on Facebook advertising.
The Brazil network comprised 35 Facebook accounts, 14 Facebook pages, 1 Facebook group, and 38 Instagram accounts. The network relied on a horde of fake and duplicate accounts – some posing as reporters, others posting fictitious news articles and running pages claiming to be news sources. The network collected nearly 883,000 page followers, 350 group followers, and 917,000 Instagram followers, and it spent $1,500 on Facebook advertising.
The Ukraine network created 72 fake Facebook accounts, 35 pages, and 13 Instagram accounts. According to Facebook, this network was most active during the 2019 parliamentary and presidential elections in Ukraine. Nearly 766,000 accounts followed one or more of this network’s fake pages, and 3,800 people followed at least one of the Instagram accounts.
The United States network comprised 54 Facebook accounts, 50 pages, and 4 Instagram accounts. Individuals in this network posed as residents of Florida – posting and commenting on their own content to make it appear more popular. Several of the network’s pages had ties to a hate group banned by Facebook in 2018. According to Facebook, this network was most active between 2015 and 2017. It gained 260,000 followers on at least one of its Facebook pages and nearly 61,500 followers on Instagram, and it spent nearly $308,000 on Facebook advertising.
In the past year alone, Facebook has removed nearly two million fake accounts and dismantled 18 coordinated public manipulation networks. Authentic decision making about voting is the cornerstone of democracy. Every twenty minutes, one million links are shared, twenty million friend requests are sent, and three million messages are sent. Despite Facebook’s efforts, it’s likely we will encounter foreign interference in one way or another online. So, each of us must take steps to protect ourselves from fake accounts and foreign manipulation.
Facebook has been under fire over the spread of misinformation connected with Russian involvement in the 2016 U.S. presidential election. In April 2018, the idea for an independent oversight board was discussed when CEO Mark Zuckerberg testified before Congress.
Organizers of the Facebook ad boycott met with Zuckerberg. It didn't go well.
"The meeting we just left was a disappointment," said Rashad Robinson, the president of Color of Change. "[Facebook] showed up to the meeting expecting an 'A' for attendance." https://t.co/Q89YxmAc0k
Facebook has come under scrutiny for its handling of hate speech and disinformation posted on the platform. With the Stop Hate for Profit movement, corporations have begun to take steps to hold Facebook accountable for the disinformation spread on the platform. So far, more than 400 advertisers, from Coca-Cola to Ford and Lego, have pledged to stop advertising on the social media platform, according to NPR. Facebook has faced intense backlash, particularly since the 2016 election, for allowing disinformation and propaganda to be posted freely. This disinformation and hate, or “fake news” as many call it, is aimed at misinforming voters and spreading hateful propaganda, potentially dampening voter participation.
A broad coalition of groups including Color of Change, the Anti-Defamation League, and the NAACP started the campaign Stop Hate for Profit. (For more on its origin, read Politico.) The goal of the campaign is to push Facebook to make much-needed changes to its policy guidelines as well as changes among the company's executives. The boycott targets the advertising dollars on which the social media juggernaut relies. The campaign has picked up steam, with new companies announcing an end to Facebook ads every day. With this momentum, the group behind the boycott has released a list of 10 first steps Facebook can take.
Stop Hate for Profit is asking Facebook to take accountability, show decency, and provide support to the groups most affected by the hate spread on the platform. The civil rights leaders behind the movement are focused on making changes at the executive level as well as holding Facebook more accountable for its lackluster terms of service. Facebook's top executives may have a conflict of interest: critics contend that Facebook has a duty to make sure misinformation and hate are not spread, but that it does not fully exercise that duty because of its relationships with politicians. Rashad Robinson, president of Color of Change, contends that there needs to be a separation between the people in charge of the content allowed on Facebook and those who are aligned with political figures. The group is asking Facebook to hire an executive with a civil rights background who can evaluate discriminatory policies and products. Additionally, the group is asking Facebook to expand what it considers hate speech. Facebook's current terms of service are criticized as ineffective and problematic.
Facebook's policies and algorithms are among the things the group asks to be changed. Current Facebook policies allow public and private hate groups to exist and even recommend them to many users. The campaign asks that Facebook remove far-right groups that spread conspiracies, such as QAnon, from the platform. It also requests the labeling of inauthentic information that sows hate and disinformation. By contrast, Twitter has taken small steps to label hateful content itself. While many criticize Twitter's actions as not going far enough, it has taken steps Facebook has yet to take. Throughout this process, the campaign asks Facebook to make all of these steps transparent to the public, including the number of ads rejected for hate or disinformation and a third-party audit of hate spread on the site.
The group also drew a connection between the hate on the Facebook platform and race issues within the company. Stop Hate for Profit cited a staggering statistic: 42% of Facebook users experience harassment on the platform. This, along with EEOC complaints filed by a former Black employee and two job candidates, points to a culture at Facebook that goes beyond allowing far-right propaganda and misinformation on the site and reveals a lack of support for users and employees of color. All of this is used to back up why it is essential that Facebook go beyond making simple statements and actually take steps to create change.
Facebook CEO and cofounder Mark Zuckerberg agreed to meet with the civil rights groups behind the boycott amid the growing number of companies getting behind Stop Hate for Profit. Many have voiced concerns that Facebook and Zuckerberg are more interested in messaging than in legitimately fixing the underlying problems. After meeting with Zuckerberg on July 7, Stop Hate for Profit released a statement about what it felt was a disappointing and uneventful meeting. The group asserted that Facebook did what it had previously feared, offering only surface-level rhetoric with no real interest in committing to change. Of the ten recommendations, Zuckerberg was open only to hiring a person with a civil rights background, and even then he declined to commit to making it a C-suite executive position. Rashad Robinson tweeted a direct statement, saying that Facebook was not ready to make any changes despite knowing the group's demands. That view appears consistent with a July 2, 2020 report of a remark Zuckerberg made to employees at a virtual town hall: "We're not gonna change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue."
For now, it remains to be seen if the increased pressure from companies pulling advertisements will eventually cause Facebook and Zuckerberg to institute changes that progressive groups have been pushing for years. So far, it appears not.
The United States prides itself on having an open democracy, with free and fair elections decided by American voters. If Americans want a policy change, then the remedy most commonly called upon is political participation--and the vote. If Americans want change, then they should vote out the problematic politicians and choose public officials to carry out the right policies. However, what if the U.S. voting system is skewed by foreign interference?
American officials are nearly unanimous in concluding, based on U.S. intelligence, that Russia interfered with the 2016 presidential election [see, e.g., here; here; and Senate Intelligence Report]. “[U]ndermining confidence in America’s democratic institutions” is what Russia seeks. In 2016, few in the U.S. were even thinking about this type of interference; the country's guard was down. Russia interfered with the election in various ways, including fake campaign advertisements, bots on Twitter and Facebook that pumped out emotionally and politically charged content, and the spread of disinformation or “fake news.” Social media hacking, as opposed to physical polling-center hacking, is at the forefront of discussion because it can not only change who is in office but also shift American voters’ political beliefs and understanding of political topics, or discourage them from voting at all.
And, if you think Russia is taking a break this election cycle, you'd be wrong. According to a March 10, 2020 New York Times article, David Porter of the FBI Foreign Influence Task Force says: "We see Russia is willing to conduct more brazen and disruptive influence operations because of how it perceives its conflict with the West."
"David Porter, an assistant section chief with the FBI’s Foreign Influence Task Force, accused Russia of conducting brazen operations aimed at spreading disinformation, exploiting division and sowing doubt about the integrity of U.S. elections" @ETuckerAP https://t.co/D0WmNBfDWj
What Interference Has to Do with Political Polarization
Facebook and Twitter have been criticized countless times by various organizations, politicians, and the media for facilitating political polarization. The U.S. political system, with its two dominant parties, is especially susceptible to polarization. Individuals belonging to either party become so invested in the party’s beliefs that they see the other party’s members not just as different but as wrong and detrimental to the future of the country. In the past twenty years, the share of people who consistently hold conservative or liberal views went from 10% to 20%, showing the increasing division, according to an article in Greater Good Magazine.
Political polarization is facilitated by platforms like Facebook and Twitter because of their content algorithms, which are designed to make the website experience more enjoyable. The Facebook News Feed “ranks stories based on a variety of factors including their history of clicking on links for particular websites,” as described by a Brookings article. Under the algorithm, if a liberal user frequently clicks on liberally skewed content, that is what they will see most. Research shows this algorithm reduced cross-cutting political “content by 5 percent for conservatives and 8 percent for liberals.” Thus, the algorithm limits a user's exposure to other opinions.
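The click-history ranking described above can be sketched in a few lines of Python. This is an illustrative toy model, not Facebook's actual system: the stories, outlet names, and scoring rule are invented for the example.

```python
from collections import Counter

def rank_stories(stories, click_history):
    """Toy news-feed ranker: score each story by how often the user
    has clicked links from its source, then sort highest-first.

    stories: list of (headline, source) tuples
    click_history: list of sources the user has clicked in the past
    """
    clicks = Counter(click_history)
    # Stories from frequently clicked sources float to the top;
    # sources never clicked score zero and sink to the bottom.
    return sorted(stories, key=lambda s: clicks[s[1]], reverse=True)

# Hypothetical user who mostly clicks one partisan outlet:
history = ["liberal-blog", "liberal-blog", "liberal-blog", "wire-service"]
feed = rank_stories(
    [("Tax bill analysis", "conservative-site"),
     ("Campaign rally recap", "liberal-blog"),
     ("Election results", "wire-service")],
    history,
)
# The feed now leads with the outlet the user already favors and
# pushes cross-cutting content down -- the "filter bubble" effect.
print([headline for headline, _ in feed])
```

Each click on a favored source widens the gap in future rankings, which is why this kind of engagement-driven loop tends to reinforce itself over time.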
So, you might ask, “Why is that bad? I want to see content more aligned with my beliefs.” Democracy is built on the exchange of varying political views and dissenting opinions, and the US has long stood by its reputation for freedom of speech and a free flow of ideas. Algorithmic grouping of like-minded people can be useful when it comes to hobbies and interests; however, consistently grouping individuals by political beliefs can have a negative impact on democracy. This grouping causes American users to live in “filter bubbles” that only expose them to content that aligns with their viewpoints. Users tend to find this grouping enjoyable because of confirmation bias, the psychological tendency to consume content that aligns with one's pre-existing beliefs. So, all the articles about Trump successfully leading the country will be ranked first on a conservative user’s Facebook News Feed and will also be the most enjoyable for them. This filter bubble is dangerous to a democratic system because the lack of diverse perspectives in news consumption encourages close-mindedness and increases distrust of anyone who disagrees.
During the 2016 presidential election, Russian hackers put out fake articles, campaign advertisements, and social media posts that were politically charged on either the liberal or conservative side. Because the Facebook algorithm shows more conservative content to conservatives and more liberal content to liberals, the hackers had no problem reaching their desired audiences quickly and effectively. On Facebook they created thousands of automated accounts that would enter various interest groups and engage with their target audiences. For example, in 2016, a Russian soldier successfully entered a U.S. Facebook group pretending to be a 42-year-old housewife, as reported by Time. He responded to political issues discussed in the group, using emotional and political buzzwords when bringing up political issues and stories. On Twitter, thousands of fake accounts run by Russians and bots spread disinformation about Hillary Clinton by continuously mentioning her email scandal from her time as Secretary of State and a fake Democratic pedophile ring called “Pizzagate.” These bots pushed hashtags like “#MAGA” and “#CrookedHillary,” producing more than a quarter of the content within those hashtags.
Facebook and Twitter’s Response to the 2016 Russian Interference
According to a Wall Street Journal article on May 26, 2020 and a Washington Post article on June 28, 2020, Facebook conducted an internal review after the 2016 election of how it could reduce polarization on its platform, but CEO Mark Zuckerberg and other executives decided against the recommended changes because they were seen as "paternalistic" and would potentially affect conservatives on Facebook more.
After coming under increasing fire from critics for allowing misinformation and hate speech to go unchecked, Facebook announced changes to "fight polarization" on May 27, 2020. The initiative included a recalibration of each user’s Facebook News Feed to prioritize content from family and friends over divisive news content. The company's reasoning was that data shows people are more likely to have meaningful discourse with people they know, which would foster healthy debate rather than ineffective, one-off conversations. Facebook also announced a policy directly targeting the spread of disinformation on the platform: an independent fact-checking program that will check content in over 50 languages around the world for false information. Disinformation that could contribute to “imminent violence, physical harm, and voter suppression” will be removed.
But those modest changes weren't enough to mollify Facebook's critics. Amid the mass nationwide protests of Minneapolis police officer Derek Chauvin's brutal killing of George Floyd, nonprofit organizations including Color of Change organized an ad boycott against Facebook. Over 130 companies agreed to remove their ads from Facebook for July or longer. That led Zuckerberg to change his position on exempting politicians from fact-checking and from the company's general policy on misinformation. Zuckerberg said that politicians would now be subject to the same policy as every other Facebook user and would be flagged if they disseminated misinformation (or hate speech) that violates Facebook's general policy.
Twitter’s CEO Jack Dorsey not only implemented a fact-checking policy similar to Facebook's but also admitted that the company needed to be more transparent in its policy making. The fact-checking policy “attached fact-checking notices” at the bottom of various tweets, alerting users that the tweets could contain false claims. Twitter also decided to forbid all political advertising on its platform. In response to Twitter's flagging of his content, President Trump issued an executive order to increase regulation of social media platforms and stop them from deleting users’ content and censoring their speech.
With the 2020 U.S. election only four months away, Internet companies are still figuring out how to stop Russian interference and the spread of misinformation, hate speech, and political polarization intended to interfere with the election. Whether they will succeed remains to be seen. But there have been more policy changes and decisions by Facebook, Twitter, Reddit, Snapchat, Twitch, and other platforms in the last month than in all of last year.
In the aftermath of the Cambridge Analytica scandal, in which the company exploited Facebook to target and manipulate swing voters in the 2016 U.S. election, Facebook conducted an internal review of its role in spreading misinformation and fake news that may have affected the election, as CEO Mark Zuckerberg announced. In 2018, Zuckerberg announced that Facebook was making changes to be better prepared to stop misinformation in the 2020 election. Critics, however, dismissed the changes as modest. As WSJ reporters Jeff Horwitz and Deepa Seetharaman detailed, Facebook executives largely rejected the internal study's recommendations to reduce polarization on Facebook: doing so might be "paternalistic" and might open Facebook up to criticisms of being biased against conservatives.
Despite the concerns about fake news and misinformation affecting the 2020 election, Facebook took the position that fact-checking for misinformation did not apply to posts and ads by politicians in the same way it applied to everyone else. Facebook's policy was even more permissive toward political ads and politicians. Facebook justified this hands-off position as advancing political speech: "Our approach is grounded in Facebook's fundamental belief in free expression, respect for the democratic process, and the belief that, especially in mature democracies with a free press, political speech is the most scrutinized speech there is. Just as critically, by limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words."
Facebook's Fact-Checking Exception for Politicians and Political Ads
In May and June 2020, Zuckerberg persisted in his hands-off approach. Some Facebook employees quit in protest, while others staged a walkout. Still, Zuckerberg did not budge.
On June 17, 2020, Color of Change, which is "the nation’s largest online racial justice organization," organized with the NAACP, the Anti-Defamation League, Sleeping Giants, Free Press, and Common Sense Media a boycott of advertising on Facebook for the month of July, labeled #StopHateforProfit. Within just 10 days, over 130 companies joined the ad boycott, including many large companies such as Ben and Jerry's, Coca-Cola, Dockers, Eddie Bauer, Levi's, The North Face, REI, Unilever, and Verizon.
On June 26, 2020, Zuckerberg finally announced some changes to Facebook's policy. The biggest changes:
(1) Moderating hateful content in ads. As Zuckerberg explained on his Facebook page, "We already restrict certain types of content in ads that we allow in regular posts, but we want to do more to prohibit the kind of divisive and inflammatory language that has been used to sow discord. So today we're prohibiting a wider category of hateful content in ads. Specifically, we're expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others. We're also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them."
(2) Adding labels to posts, including from candidates, that may violate Facebook's policy. As Zuckerberg explained, "Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.
"We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society -- but we'll add a prompt to tell people that the content they're sharing may violate our policies.
"To clarify one point: there is no newsworthiness exemption to content that incites violence or suppresses voting. Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down. Similarly, there are no exceptions for politicians in any of the policies I'm announcing here today."
Facebook's new labeling of candidates' content sounds very similar to the Twitter approach Zuckerberg had criticized as wrong. And Facebook's new policy on moderating hateful content in ads that "are a threat to the physical safety, health or survival of others," including "people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status," seems a positive step toward preventing Facebook from being a platform to sow racial discord, which is a goal of Russian operatives according to U.S. intelligence.
Facebook's new policy on moderating political ads and posts by politicians and others
The organizers of the boycott, however, were not impressed with Facebook's changes. They issued a statement quoted by NPR: "None of this will be vetted or verified — or make a dent in the problem on the largest social media platform on the planet. We have been down this road before with Facebook. They have made apologies in the past. They have taken meager steps after each catastrophe where their platform played a part. But this has to end now."
Bill Gates, cofounder of Microsoft, joined the growing number of high-profile individuals demanding that the U.S. government step up its regulation of big tech companies. In a June 2019 interview at the Economic Club of Washington, DC, Gates said, “Technology has become so central that governments have to think: What does that mean about elections?” Gates focused on the need to reform user privacy rights and data security.
This concern comes following the details of a Russian-led social media campaign to “sow discord in the U.S. political system through what it termed ‘information warfare’” outlined in Volume I Section II of the Mueller Report. According to the Mueller Report, a Russian-based organization, known as the Internet Research Agency (IRA), “carried out a social media campaign that favored presidential candidate Donald J. Trump and disparaged presidential candidate Hillary Clinton.” As early as 2014, IRA employees traveled to the United States on intelligence-gathering missions to obtain information and photographs for use in their social media posts. After returning to St. Petersburg, IRA agents began creating and operating social media accounts and group pages which falsely claimed to be controlled by American activists. These accounts addressed divisive political and social issues in America and were designed to attract American audiences. The IRA's operation also included the purchase of political advertisements on social media in the names of American persons and entities.
Once the IRA-controlled accounts established a widespread following, they began organizing and staging political rallies within the United States. According to the Mueller Report, IRA-controlled accounts were used to announce and promote the events. Once potential attendees RSVP’d to an event page, the IRA-controlled account would message these individuals to ask if they were interested in serving as an “event coordinator.” The IRA then further promoted the event by contacting US media and directing them to speak with the coordinator. After the event, the IRA-controlled accounts posted videos and photographs of it. Because the IRA was able to recruit unwitting American assets to run the events, no IRA employee needed to be present at the actual event.
Throughout the 2016 election season, several prominent political figures (including President Trump, Donald J. Trump Jr., Eric Trump, Kellyanne Conway, and Michael Flynn) and various American media outlets responded to, interacted with, or otherwise promoted dozens of tweets, posts, and other political content created by the IRA. By the end of the 2016 U.S. election, the IRA had the ability to reach millions of Americans through its social media accounts. The Mueller Report confirmed the following figures with the individual social media companies:
Twitter identified 3,814 IRA-controlled accounts that directly contacted an estimated 1.4 million people. In the ten weeks before the 2016 U.S. presidential election, these accounts posted approximately 175,993 tweets.
Facebook identified 470 IRA-controlled accounts that published more than 80,000 posts, which reached as many as 126 million people. The IRA also paid for 3,500 advertisements.
Instagram identified 170 IRA-controlled accounts that posted approximately 120,000 pieces of content.
Since the details of the IRA’s social media campaign were publicized, big tech companies have been subject to heightened levels of scrutiny regarding their effort to combat misinformation and other foreign interference in American elections. However, many members of Congress were pushing for wide-ranging social media reform even before the release of the Mueller Report.
In April 2018, Facebook founder and CEO Mark Zuckerberg testified over a two-day period during a joint session of the Senate Commerce and Judiciary Committees and the House Energy and Commerce Committee. These hearings were prompted by the Cambridge Analytica scandal. Cambridge Analytica, a political consulting firm with links to the Trump campaign, harvested the data of an estimated 87 million Facebook users to psychologically profile voters during the 2016 election. Zuckerberg explained that, when functioning properly, Facebook collects users’ information so that advertisements can be tailored to the specific group of people a third party wishes to target as part of its advertising strategy. In this scenario, the third parties never receive any Facebook users’ data. However, Cambridge Analytica exploited a loophole in Facebook’s Application Programming Interface (API) that allowed the firm to obtain users’ data after the users accessed a quiz called “thisismydigitallife.” The quiz was created by Aleksandr Kogan, a Russian American who worked at the University of Cambridge. Zuckerberg explained to members of Congress that what Cambridge Analytica did was improper, but he also admitted that Facebook made a serious mistake in trusting Cambridge Analytica when the firm told Facebook it was not using the data it had collected through the quiz.
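The data flow Zuckerberg described can be sketched in a few lines of Python. This is an illustrative model, not Facebook's actual code or API: in the intended design, an advertiser submits targeting criteria, the platform performs the matching internally against its private user records, and only aggregate results (impression counts) ever leave the platform. All class and field names here are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class User:
    user_id: int
    age: int
    interests: set


class AdPlatform:
    """Hypothetical ad platform: user data stays inside the platform."""

    def __init__(self, users):
        self._users = users  # private: raw records are never handed out

    def place_ad(self, ad_text, min_age, interest):
        # Matching happens inside the platform; the advertiser only
        # learns how many impressions were served, not who saw the ad.
        matched = [u for u in self._users
                   if u.age >= min_age and interest in u.interests]
        for user in matched:
            pass  # deliver ad_text to user's feed (delivery omitted)
        return {"impressions": len(matched)}  # aggregate data only


users = [User(1, 25, {"tech"}), User(2, 17, {"tech"}), User(3, 30, {"sports"})]
platform = AdPlatform(users)
result = platform.place_ad("Try our app", min_age=18, interest="tech")
# → {'impressions': 1}
```

The loophole was precisely a violation of this boundary: an app embedded in the platform (the quiz) was able to pull raw user records out instead of receiving only aggregates.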
Another high-profile hearing occurred on September 5, 2018, when Twitter co-founder and CEO Jack Dorsey was called to testify before the Senate Intelligence Committee about foreign influence operations on social media platforms. During this hearing, Dorsey discussed Twitter’s algorithm for preventing the circulation of tweets that violate the platform’s Terms of Service, including the kind of malicious behavior seen in the 2016 election. Dorsey also discussed Twitter’s retrospective review of IRA-controlled accounts and how the information gathered is being used to quickly identify malicious automated accounts, a tactic the IRA relied on heavily before the 2016 election. Lastly, Dorsey briefed the committee on Twitter’s suspicion that other countries, namely Iran, may be launching their own social media campaigns.
With the 2020 election quickly approaching, these social media executives are under pressure to prevent their platforms from being abused in the election process. Likewise, the calls for elected officials to increase regulation of social media platforms are growing stronger by the day, especially since Gates joined the conversation.
Ahead of a key decision by India's telecommunications regulator, Mark Zuckerberg wrote a blog post in the Times of India defending his nonprofit Internet.org, which provides free but limited Internet access to underserved areas. Its service, called "Free Basics," lets users access the Internet for only a limited number of apps, such as weather, Wikipedia, and, yes, Facebook. Other app developers can apply to Internet.org to be included in Free Basics.
Mark Zuckerberg visited Colombia and President Juan Manuel Santos to launch a free Internet.org app for smartphones that will enable subscribers of local phone service Tigo to get free Internet access to a limited number of free services, including Facebook and several government sites such as "Instituto Colombiano para la Evaluación de la Educación, an education assessment service and Agronet, a service that provides information on agriculture and rural development." The list of free services includes:
1doc3
24 Symbols
AccuWeather
Agronet
BabyCenter & MAMA
Facebook
Girl Effect
Instituto Colombiano para la Evaluación de la Educación
Messenger
Mitula
Para la Vida
Su Dinero
Tambero.com
UNICEF
Wikipedia
YoAprendo