The Free Internet Project

Turkey amends Internet Law to Impose Stiff Requirements on "Social Network Providers"

The Turkish government has amended the existing Internet Law No. 5651 (on the Regulation of Broadcasts via the Internet and the Prevention of Crimes Committed Through Such Broadcasts). According to VOA, after his daughter and her husband were insulted on social media, President Recep Tayyip Erdogan declared in July 2020 that social media are “immoral” and would be “completely banned or controlled.”

Turkey's new Internet law, which goes into effect October 1, 2020, requires social media companies--called "social network providers" under the new law--like TikTok and Facebook to register local offices in Turkey, subjecting them to local laws and tax regulations. Companies that the authorities find noncompliant face crippling restrictions on their bandwidth, as well as substantial fines issued to their mandatory offices in Turkey.

Social network platforms will also have to store the data of Turkish users in Turkey (i.e., data localization). In addition, social network providers that are accessed more than 1 million times daily are required (1) to maintain a notice-and-takedown procedure in which people can submit a notice of a violation of rights based on content on the network, with the company required to remove the offending material within 48 hours, and (2) to publish transparency reports regarding the notices and takedowns. According to Lexology, "An administrative fine of TRY 5 million (approx. EUR 615,000) may be imposed for incompliance with takedown request handling and TRY 10 million may be imposed for incompliance with the reporting requirements." Finally, the new law recognizes that people in Turkey have a right to be forgotten and can request that their names be removed from webpages as ordered by a court.
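To make the mechanics concrete, here is a minimal sketch, in Python, of how a provider's compliance system might track the law's 48-hour takedown window and flag fine exposure using the amounts reported by Lexology. All names and structures here are hypothetical illustrations, not any platform's actual implementation.

```python
# Hypothetical sketch of tracking the law's 48-hour takedown window.
# Class and field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

TAKEDOWN_WINDOW = timedelta(hours=48)
FINE_TAKEDOWN_TRY = 5_000_000    # fine for mishandled takedowns (per Lexology)
FINE_REPORTING_TRY = 10_000_000  # fine for missed transparency reports (per Lexology)

@dataclass
class TakedownNotice:
    content_id: str
    received_at: datetime
    removed_at: Optional[datetime] = None

    @property
    def deadline(self) -> datetime:
        return self.received_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        # A notice is overdue if the content is still up past the deadline.
        return self.removed_at is None and now > self.deadline

notice = TakedownNotice("post-123", received_at=datetime(2020, 10, 1, 9, 0))
print(notice.deadline)                           # 2020-10-03 09:00:00
print(notice.is_overdue(datetime(2020, 10, 4)))  # True -> risk of TRY 5M fine
```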

Critics of the new law fear that it will be used to censor political dissent. "If the social media platforms decide to establish offices in Turkey, then they will be compelled to remove the content . . . [subject to] so-called personal rights violations," Professor Yaman Akdeniz, co-founder of the Freedom of Expression Society, an advocacy group in Istanbul, told VOA.

Such attempts to curtail access to online media are not unprecedented in Turkey: over 400,000 web pages have been banned and thousands of people prosecuted for their posts, according to VOA. In response, people in Turkey have used VPNs and proxies to circumvent the censorship. However, throttling bandwidth would likely defeat even VPNs as a way to reach online content the government deems problematic. Devlet Bahçeli, the president of the MHP (Nationalist Movement Party), has already called for state intervention in the use of VPNs and proxies to ensure “the clean use of social media.” He also pledged his staunch support for any such bill brought before the Turkish Grand National Assembly, further dimming hopes for an accessible and free internet.

The fast-growing popularity of social media in Turkey has drawn people away from mainstream media for their news. Atilla Yesilada of Global Source Partners told VOA that one of Erdogan’s primary motives for proposing the stringent legislation is to regain control of “the flow of information.” Professor Akdeniz also observed that news websites are at risk of state censorship and manipulation regarding “the government’s past injustices, corruption, and irregularity allegations.” However, because the young generation in Turkey has grown fond of social media, the attempt to restrain the internet may backfire and alienate those young voters from President Erdogan, Yesilada warned.

Turkey's new Internet law is modeled on Germany's controversial Network Enforcement Act, or NetzDG for short, as explained by EFF: The German "law mandates social media platforms with two million users to name a local representative authorized to act as a focal point for law enforcement and receive content take down requests from public authorities. The law mandates social media companies with more than two million German users to remove or disable content that appears to be “manifestly illegal” within 24 hours of having been alerted of the content."

In summary, Turkey's new Internet law has the following components:

  • Social network providers that are accessed in Turkey more than 1 million times daily must have a local office in Turkey.
  • They must store the data of Turkish users locally in Turkey.
  • They must have a notice-and-takedown process that allows users to send a notice of unlawful material or violation of rights, after which the company has 48 hours to remove it.
  • They must process right-to-be-forgotten removal of content as ordered by a court. 

For more about Turkey's controversial new law for social network providers, visit Lexology.

--written by Yucheng Cui

Facebook removes Donald Trump post claiming children are "almost immune" to COVID-19 for violating rules on COVID misinformation; Twitter temporarily suspends Trump campaign account over same COVID misinformation

On August 5, 2020, as reported by the Wall St. Journal, Facebook removed a post from Donald Trump that contained a video of an interview he did with Fox News in which he reportedly said that children are "almost immune from this disease." Trump also said COVID-19 “is going to go away,” and that “schools should open” because “this it will go away like things go away.” A Facebook spokesperson explained to the Verge: "This video includes false claims that a group of people is immune from COVID-19 which is a violation of our policies around harmful COVID misinformation." 

Twitter temporarily suspended the @TeamTrump campaign account from tweeting because of the same content. "The @TeamTrump Tweet you referenced is in violation of the Twitter Rules on COVID-19 misinformation,” Twitter spokesperson Aly Pavela said in a statement to TechCrunch. “The account owner will be required to remove the Tweet before they can Tweet again.” The Trump campaign resumed tweeting, so it appears it complied and removed the tweet.

Neither Facebook nor Twitter provided much explanation of their decisions on their platforms, at least based on our search. They likely interpreted "almost immune from this disease" as misleading because children of every age can be infected by coronavirus and suffer adverse effects, including death (e.g., a 6-year-old, a 9-year-old, and an 11-year-old). In Florida, for example, 23,170 minors had tested positive for coronavirus by July 2020. The CDC just published a study on the spread of coronavirus among children at a summer camp in Georgia and found extensive infection spread among the children:

These findings demonstrate that SARS-CoV-2 spread efficiently in a youth-centric overnight setting, resulting in high attack rates among persons in all age groups, despite efforts by camp officials to implement most recommended strategies to prevent transmission. Asymptomatic infection was common and potentially contributed to undetected transmission, as has been previously reported (1–4). This investigation adds to the body of evidence demonstrating that children of all ages are susceptible to SARS-CoV-2 infection (1–3) and, contrary to early reports (5,6), might play an important role in transmission (7,8). 

Experts around the world are conducting studies to learn more about how COVID-19 affects children. The Smithsonian Magazine compiles a summary of some of these studies and is well worth reading. One of the studies, from the Department of Infectious Disease Epidemiology at the London School of Hygiene & Tropical Medicine, examined the hypothesis: "Decreased susceptibility could result from immune cross-protection from other coronaviruses (9,10,11), or from non-specific protection resulting from recent infection by other respiratory viruses (12), which children experience more frequently than adults." But the study noted: "Direct evidence for decreased susceptibility to SARS-CoV-2 in children has been mixed, but if true could result in lower transmission in the population overall." This inquiry was undertaken because, thus far, children have tested positive at lower rates than adults. According to the Mayo Clinic staff: "Children of all ages can become ill with coronavirus disease 2019 (COVID-19). But most kids who are infected typically don't become as sick as adults and some might not show any symptoms at all." Moreover, a study from researchers in Berlin found that children "carried the same viral load, a signal of infectiousness." The Smithsonian Magazine article underscores that experts believe more data and studies are needed to understand how COVID-19 affects children.

Regarding the Facebook removal, Courtney Parella, a spokesperson for the Trump campaign, said: "The President was stating a fact that children are less susceptible to the coronavirus. Another day, another display of Silicon Valley's flagrant bias against this President, where the rules are only enforced in one direction. Social media companies are not the arbiters of truth."

TikTok is all the rage, so why did India ban it?

TikTok is a social media platform owned by a Chinese firm named ByteDance. The app was first developed in China but has grown increasingly popular, especially among teens all over the world, for its combination of music, dance, and quirky humor in short videos that users create and share. Another popular feature is live-streaming, which allows real-time interaction between the host and the audience. Users do not even need to speak English to become an overnight hit with millions of followers on TikTok.

TikTok has become a phenomenon. The idea of producing short clips is not new--Snapchat and Instagram have similar functions--and creating videos has been around since YouTube. But, with an enormous user base in China, this new contender surpassed other video-sharing sites and gained incredible popularity. Presently, TikTok has over 500 million active users worldwide. Such worldwide success as an Internet platform is rare for a Chinese-based company. China’s strict internet restrictions are well known: behind the country's firewalls, mainstream Western social media sites, such as Facebook and Twitter, are inaccessible in China.

India announced the controversial decision to ban TikTok within its borders. Why? As the border clash between China and India escalated, the Indian government recently banned 59 Chinese apps, including TikTok, citing concerns over activities prejudicial to the sovereignty and integrity of India, according to the New York Times. Alternative Indian platforms such as Glance and Roposo are eager to pick up new users after TikTok’s departure, but watchdog groups are concerned that Indian local apps may also be censored and controlled by the government or exploited for political propaganda. While banning TikTok could be a form of retaliation against China for the border skirmish, the ban could also be viewed as a sign of India’s determination to safeguard its citizens’ data from foreign manipulation.

Taking a cue from India’s decision, the US is considering a ban on TikTok too. US Secretary of State Mike Pompeo warned Americans not to use the app unless “you want your private information in the hands of the Chinese Communist Party,” suggesting the app secretly funnels users’ data to the Chinese government.

With a reputation for exercising a tight grip over the internet, the Chinese government is frequently accused of privacy breaches. ByteDance, the Chinese firm that owns TikTok, has encountered several challenges as it expanded into markets worldwide. In February 2019, ByteDance was fined $5.7 million (about £4.2 million) by the US Federal Trade Commission for illegally collecting personal information from children under 13 without requiring parental consent. On July 3, 2020, the head of the UK’s Information Commissioner’s Office announced that TikTok was undergoing a similar investigation regarding protections of children’s personal data, as its open messaging system permits adults to directly contact children and thus exposes children to risks such as online solicitation and harassment.

Of course, data breaches in social media are not uncommon in the modern digital age. Facebook has been accused multiple times of harvesting users’ private information without their consent. Thus, banning TikTok in the name of privacy protection seems extreme, since other misuses of data by social media companies have not resulted in a country banning an entire platform.

Some users suspect that a major impetus for the proposed US ban on TikTok was the significant role TikTok played during the Black Lives Matter protests. In the pandemic era, TikTok has fostered new forms of political expression. For example, activists who could not march in the streets in person created videos with the hashtag #blacklivesmatter to demonstrate online solidarity against racial injustice. As CNN reported, users on TikTok also live-streamed street protests, documenting police assaulting peaceful demonstrators. TikTok lowered the barriers to communication, allowing users from all over the globe to share content and exchange ideas. Apart from showing cute dogs, teenagers’ funny dance steps, and other mundane occurrences, TikTok has entered the political sphere, even though relatively few politicians are active on the site. Despite the alleged privacy and national security concerns, it is one of the fastest and most unfiltered ways for people to spread messages.

“Any kind of public policy response which is premised on grounds of national security needs to emerge from well-defined criteria, which seems to be absent here,” Mr. Gupta, executive director of the Internet Freedom Foundation, told the New York Times. Banning may be a quick fix, but if authorities can ban an app in the name of protecting citizens’ data without showing clear evidence for the claim or legal authority for such an extreme action, it sets a dangerous precedent that would greatly impair internet freedom. Of course, there remains the tension that popular Western-based social media platforms are still banned in China.

--written by Candice Wang

Meeting Between Facebook, Zuckerberg and Stop Hate for Profit Boycott Group Turns into a Big Fail

Facebook has come under scrutiny for its handling of hate speech and disinformation posted on the platform. With the Stop Hate for Profit movement, corporations have begun to take steps to hold Facebook accountable for the disinformation spread on the platform. So far, more than 400 advertisers, from Coca-Cola to Ford and Lego, have pledged to stop advertising on the social media platform, according to NPR. Facebook has faced intense backlash, particularly since the 2016 election, for allowing disinformation and propaganda to be posted freely. The disinformation and hate, or “fake news” as many call it, is aimed at misinforming voters and spreading hateful propaganda, potentially dampening voter participation.

A broad coalition of groups including Color of Change, the Anti-Defamation League, and the NAACP started the campaign Stop Hate for Profit. (For more on the origin, read Politico.) The goal of the campaign is to push Facebook to make much-needed changes to its policy guidelines as well as changes among the company's executives. The boycott targets the advertising dollars on which the social media juggernaut relies. The campaign has begun to pick up steam, with new companies announcing an end to Facebook ads every day. With this momentum, the group behind the boycott has released a list of 10 first steps Facebook can take.

Stop Hate for Profit is asking that Facebook take accountability, show decency, and provide support to the groups most affected by the hate spread on the platform. The civil rights leaders behind this movement are focused on making changes at the executive level as well as holding Facebook more accountable for its lackluster terms of service. The top executives currently at Facebook may have conflicts of interest. Critics contend that Facebook has a duty to make sure misinformation and hate are not spread, but does not fulfill that duty to the fullest because of its relationships with politicians. Rashad Robinson, president of Color of Change, contends that there needs to be a separation between the people in charge of the content allowed on Facebook and those who are aligned with political figures. The group is asking Facebook to hire an executive with a civil rights background who can evaluate discriminatory policies and products. Additionally, the group is asking Facebook to expand what it considers hate speech. Facebook's current terms of service are criticized as ineffective and problematic.

Facebook's policies and algorithms are among the things the group asks to be changed. Current Facebook policies allow public and private hate groups to exist and even recommend them to many users. The campaign asks that Facebook remove far-right groups that spread conspiracies, such as QAnon, from the platform. It also requests the labeling of inauthentic content that spreads hate and disinformation. By contrast, Twitter has taken small steps to label hateful content itself. While many criticize Twitter's actions as not going far enough, it has taken steps Facebook has yet to take. Throughout this process, the campaign asks, Facebook should be transparent with the public about all these steps--in the number of ads rejected for hate or disinformation and in a third-party audit of hate spread on the site.

The group also drew a connection between the hate on the Facebook platform and race issues within the company. Stop Hate for Profit provided a staggering statistic: 42% of Facebook users experience harassment on the platform. This, along with the EEOC complaints filed by a former Black employee and two job candidates, points to a culture at Facebook that goes far beyond allowing far-right propaganda and misinformation on the site and highlights a lack of support for users and employees of color. All of this is used to back up why it is essential that Facebook go beyond making simple statements and actually take steps to create change.

Facebook CEO and cofounder Mark Zuckerberg agreed to meet with the civil rights groups behind the boycott amid the growing number of companies getting behind Stop Hate for Profit. Many voiced concerns that Facebook and Zuckerberg are more interested in messaging than in legitimately fixing the underlying problems. Upon meeting with Zuckerberg on July 7, Stop Hate for Profit released a statement about what it felt was a disappointing and uneventful meeting. The group asserted that Facebook did what it had previously feared, offering only surface-level rhetoric with no real interest in committing to any real change. Of the ten recommendations, Zuckerberg was open to addressing only the hiring of a person with a civil rights background, and even then he declined to fully commit to making that position, if created, a C-suite executive-level position. Rashad Robinson tweeted a direct statement, saying that Facebook was not ready to make any changes despite knowing the demands of the group. That view appears to be consistent with a July 2, 2020 report of a remark by Zuckerberg to employees at a virtual town hall: "We're not gonna change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue."

For now, it remains to be seen if the increased pressure from companies pulling advertisements will eventually cause Facebook and Zuckerberg to institute changes that progressive groups have been pushing for years. So far, it appears not.   

--written by Bisola Oni

What is Parler? An "unbiased social media"? Or a platform for conservative Republicans?

Parler (French for "to talk") is a social media platform started in 2018. Its mission is to be “an unbiased social media focused on real user experiences and engagement." It is touted as an alternative to Twitter that allows users to post and comment much as they would on Twitter--but without political bias. Many Republican politicians who believe Twitter is biased against conservatives have migrated to Parler and are promoting it as a platform. Since Twitter and Snapchat recently moderated some of Donald Trump’s posts that violated their community standards, conservative Republicans have switched to Parler. Ted Cruz joined Parler, as did Republican politicians Jim Jordan, Elise Stefanik, and Nikki Haley, as CNBC reported. Parler may become Republican lawmakers’ and Trump's favorite social media site. Trump’s campaign manager Brad Parscale accused Twitter and Facebook of biased censorship and stated that the campaign team may select an alternative platform, such as Parler, as reported by the Wall Street Journal. Parler ranked as the top news app in Apple’s App Store and had 1.5 million users in 2020. By comparison, Twitter has over 145 million active users.

Content moderation by Internet platforms has become a hot-button issue. In the past, platforms took permissive approaches in the name of free speech, but they soon realized the need to moderate some objectionable content posted by their users. Most people would agree that, despite the importance of free expression and the free flow of information, allowing everyone to post anything online may lead to false, illegal, and harmful content being shared. So Internet companies must exercise some moderation of user content, but the unsolved puzzle is what the standards should be and who should decide them.

Touted by Republicans, Parler has attracted many new users in the past few days. However, some users realized that the newly hyped platform is not free of content moderation. Besides restricting the commonly prohibited content outlined in Parler’s Community Guidelines, such as spam, fighting words, pornography, and criminal solicitation, Parler also makes clear in its User Agreement: “Parler may remove any content and stop your access to the Services at any time and for any reason or no reason, although Parler endeavors to allow all free speech that is lawful and does not infringe the legal rights of others … Although the Parler Guidelines provide guidance to you regarding content that is not proper, Parler is free to remove content and terminate your access to the Services even where the Guidelines have been followed.”

Some liberal users were reportedly banned from Parler. Techdirt compiled some of the users who were banned. Parler's banning of liberal users does not appear consistent with its motto as an "unbiased social media." Even some conservative commentators criticized Parler for not abiding by its privacy policy when it asked users for a driver's license. The goal of a politically unbiased Internet platform may be a worthy one, but it remains to be seen whether Parler provides such a space.

--written by Candice Wang

New study by Alto Data Analytics casts doubt on effectiveness of fact-checking to combat published fake news

As “fake news” continues to plague the digital socio-political space, a new form of investigative reporter has risen to combat this disinformation: the fact-checker. Generally, fact-checkers are journalists working to verify digital content by performing additional research on the content’s claims. Whenever a fact-checker uncovers a falsity masquerading as fact (aka fake news), they rebut the deceptive representation through articles, blog posts, and other explanatory comments that illustrate how the statement misleads the public. [More from Reuters] As of 2019, the number of fact-checking outlets across the globe has grown to 188 in 60 countries, according to the Reporters Lab.

But recent research reveals that this upsurge in fact-checkers may not have that great an impact on defeating digital disinformation. From December 2018 to the European Parliamentary elections in May 2019, the big-data firm Alto Data Analytics collected socio-political debate data from a variety of digital media platforms, in one of the first studies assessing the success of fact-checking efforts. Alto’s study examined five European countries: France, Germany, Italy, Poland, and Spain. Focusing on verified fact-checkers in each of these countries, Alto’s cloud-based Alto Analyzer software tracked how users interacted with these trustworthy entities in digital space. Basing the experiment exclusively on Twitter, the Analyzer platform recorded how users interacted with the fact-checkers’ tweets through retweets, replies, and mentions. From this information, the data scientists calculated the fact-checkers’ effectiveness in reaching the communities most affected by disinformation.
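As a rough illustration of the study's core measure, here is a minimal sketch, in Python with entirely hypothetical data and account names, of how one might estimate fact-checkers' share of overall Twitter activity from interaction records:

```python
# Hypothetical sketch: estimate what share of Twitter activity involves
# fact-checkers, in the spirit of Alto's measure. Data and names are invented.

# Each record is one interaction (retweet, reply, or mention) and the
# account it targets, as collected during the study window.
interactions = [
    ("retweet", "factcheck_outlet_fr"),
    ("reply",   "partisan_account_1"),
    ("mention", "factcheck_outlet_de"),
    ("retweet", "partisan_account_2"),
    ("reply",   "partisan_account_3"),
]

fact_checkers = {"factcheck_outlet_fr", "factcheck_outlet_de"}  # verified outlets

fc_count = sum(1 for _, target in interactions if target in fact_checkers)
share = fc_count / len(interactions)
print(f"Fact-checker share of activity: {share:.1%}")
# Alto found this share was only 0.1% to 0.3% of real Twitter activity.
```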

Despite its limitation to five select countries, the study yielded discouraging results. In total, interactions with the fact-checking outlets in these countries amounted to only between 0.1% and 0.3% of total Twitter activity during the period. Across the five countries in the study, fact-checkers penetrated least successfully in Germany, followed closely by Italy. Conversely, fact-checkers had the greatest distributive effect in France. Fact-checkers’ digital presence tended to reach only a few online communities. The study found that “fact-checkers . . . [were] unable to significantly penetrate the communities which tend to be exposed most frequently to disinformation content.” In other words, fact-checking efforts reached few individuals, and the ones they did reach were often other fact-checkers. Alto Data notes, however, that its analysis “doesn’t show that the fact-checkers are not effective in the broader socio-political conversation.” But “the reach of fact-checkers is limited, often to those digital communities which are not targets for or are propagating disinformation.” [Alto Data study]

Alto proposed ideas for future research on this topic: expanding the study beyond one social media site; conducting research to find differences in effectiveness between various types of digital content—memes, videos, pictures, and articles; taking search engine comparisons into account; and providing causal explanations for penetration differences between countries.

Research studies in the United States have also produced results doubting the effectiveness of fact-checkers. A Tulane University study found that citizens were more likely to alter their views after reading ideologically consistent media outlets than after reading neutral fact-checking entities. Some studies even suggest that encounters with corrective fact-checking pieces have undesired psychological effects on content consumers, hardening individuals’ partisan positions and perceptions instead of dispelling them.

These studies suggest that it's incredibly difficult to "unring the bell" of fake news, so to speak.  That is why the proactive efforts of social media companies and online sites to minimize the spread of blatantly fake news related to elections may be the only hope of minimizing its deleterious effects on voters.  

Should tech companies do more for election security? Hard lessons from Russian social media warfare in the 2016 U.S. election

Bill Gates, founder of Microsoft, joined the growing number of high-profile individuals demanding that the U.S. government step up its regulation of big tech companies. In a June 2019 interview at the Economic Club of Washington, DC, Gates said, “Technology has become so central that governments have to think: What does that mean about elections?” Gates focused on the need to reform user privacy rights and data security.

This concern follows the details of a Russian-led social media campaign to “sow discord in the U.S. political system through what it termed ‘information warfare’” outlined in Volume I, Section II of the Mueller Report. According to the Mueller Report, a Russian-based organization known as the Internet Research Agency (IRA) “carried out a social media campaign that favored presidential candidate Donald J. Trump and disparaged presidential candidate Hillary Clinton.” As early as 2014, IRA employees traveled to the United States on intelligence-gathering missions to obtain information and photographs for use in their social media posts. After returning to St. Petersburg, IRA agents began creating and operating social media accounts and group pages that falsely claimed to be controlled by American activists. These accounts addressed divisive political and social issues in America and were designed to attract American audiences. The IRA's operation also included the purchase of political advertisements on social media in the names of American persons and entities.

Once the IRA-controlled accounts established a widespread following, they began organizing and staging political rallies within the United States. According to the Mueller Report, IRA-controlled accounts were used to announce and promote the events. Once potential attendees RSVP’d to the event page, the IRA-controlled account would message these individuals to ask if they were interested in serving as an “event coordinator.” The IRA then further promoted the event by contacting US media about it and directing them to speak with the coordinator. After the event, the IRA-controlled accounts posted videos and photographs of the event. Because the IRA was able to recruit unwitting American assets to run the events, no IRA employee needed to be present at the actual event.

Throughout the 2016 election season, several prominent political figures [including President Trump, Donald J. Trump Jr., Eric Trump, Kellyanne Conway, and Michael Flynn] and various American media outlets responded to, interacted with, or otherwise promoted dozens of tweets, posts, and other political content created by the IRA. By the end of the 2016 U.S. election, the IRA had the ability to reach millions of Americans through its social media accounts. The Mueller Report confirmed the following information with the individual social media companies:

  1. Twitter identified 3,814 IRA-controlled accounts that directly contacted an estimated 1.4 million people. In the ten weeks before the 2016 U.S. presidential election, these accounts posted approximately 175,993 tweets.
  2. Facebook identified 470 IRA-controlled accounts that posted more than 80,000 posts reaching as many as 126 million persons. The IRA also paid for 3,500 advertisements.
  3. Instagram identified 170 IRA-controlled accounts that posted approximately 120,000 pieces of content.

Since the details of the IRA’s social media campaign were publicized, big tech companies have been subject to heightened levels of scrutiny regarding their effort to combat misinformation and other foreign interference in American elections. However, many members of Congress were pushing for wide-ranging social media reform even before the release of the Mueller Report.

In April 2018, Facebook founder and CEO Mark Zuckerberg testified over a two-day period during a joint session of the Senate Commerce and Judiciary Committees and before the House Energy and Commerce Committee. These hearings were prompted by the Cambridge Analytica scandal. Cambridge Analytica—a political consulting firm with links to the Trump campaign—harvested the data of an estimated 87 million Facebook users to psychologically profile voters during the 2016 election. Zuckerberg explained that, when functioning properly, Facebook collects users’ information so that advertisements can be tailored to the specific group of people a third party wishes to target as part of its advertising strategy. In this scenario, the third parties never receive any Facebook users’ data. However, Cambridge Analytica utilized a loophole in Facebook’s Application Programming Interface (API) that allowed the firm to obtain users’ data after the users accessed a quiz called “thisisyourdigitallife.” The quiz was created by Aleksandr Kogan, a Russian-American who worked at the University of Cambridge. Zuckerberg explained to members of Congress that what Cambridge Analytica did was improper, but he also admitted that Facebook made a serious mistake in trusting Cambridge Analytica when the firm told Facebook it was not using the data it had collected through the quiz.

Another high-profile hearing occurred on September 5, 2018, when Twitter co-founder and CEO Jack Dorsey was called to testify before the Senate Intelligence Committee to discuss foreign influence operations on social media platforms. During this hearing, Dorsey discussed Twitter’s algorithm that prevents the circulation of tweets that violate the platform’s Terms of Service, including the malicious behavior seen in the 2016 election. Dorsey also discussed Twitter’s retrospective review of IRA-controlled accounts and how the information gathered is being used to quickly identify malicious automated accounts, a tool the IRA relied on heavily prior to the 2016 election. Lastly, Dorsey briefed the committee on Twitter’s suspicion that other countries—namely Iran—may be launching their own social media campaigns.

With the 2020 election quickly approaching, these social media executives are under pressure to prevent their platform from being abused in the election process. Likewise, the calls for elected officials to increase regulation of social media platforms are growing stronger by the day, especially since Gates joined the conversation.

[Sources: Mueller Report, PBS, Washington Post, CNN, The Guardian, Vox I, Vox II]
