The Free Internet Project

Blog

Summary: Mounting Allegations That Facebook and Zuckerberg Show Political Bias and Favoritism Toward Trump and Conservatives in Content Moderation

In the past week, more allegations surfaced that Facebook executives have intervened in questionable ways in the company's content moderation procedures, showing favoritism toward Donald Trump, Breitbart, and other conservatives. Below is a summary of the main allegations that Facebook has been politically biased in favor of Trump and conservatives. This page will be updated if more allegations are reported.

1. Olivia Solon, Sensitive to claims of bias, Facebook relaxed misinformation rules for conservative pages, NBC News (Aug. 7, 2020, 2:31 PM): "The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook's fact-checking could go public and fuel allegations that the social network was biased against conservatives. The removal of the strikes has furthered concerns from some current and former employees that the company routinely relaxes its rules for conservative pages over fears about accusations of bias."

2. Craig Silverman, Facebook Fired an Employee Who Collected Evidence of Right-Wing Page Getting Preferential Treatment, Buzzfeed (Aug. 6, 2020, 4:13 PM): "[S]ome of Facebook’s own employees gathered evidence they say shows Breitbart — along with other right-wing outlets and figures including Turning Point USA founder Charlie Kirk, Trump supporters Diamond and Silk, and conservative video production nonprofit Prager University — has received special treatment that helped it avoid running afoul of company policy. They see it as part of a pattern of preferential treatment for right-wing publishers and pages, many of which have alleged that the social network is biased against conservatives." Further: "Individuals that spoke out about the apparent special treatment of right-wing pages have also faced consequences. In one case, a senior Facebook engineer collected multiple instances of conservative figures receiving unique help from Facebook employees, including those on the policy team, to remove fact-checks on their content. His July post was removed because it violated the company’s 'respectful communication policy.'”

3. Ryan Mac, Instagram Displayed Negative Related Hashtags for Biden, but Hid them for Trump, Buzzfeed (Aug. 5, 2020, 12:17 PM): "For at least the last two months, a key Instagram feature, which algorithmically pushes users toward supposedly related content, has been treating hashtags associated with President Donald Trump and presumptive Democratic presidential nominee Joe Biden in very different ways. Searches for Biden also return a variety of pro-Trump messages, while searches for Trump-related topics only returned the specific hashtags, like #MAGA or #Trump — which means searches for Biden-related hashtags also return counter-messaging, while those for Trump do not."

4. Ryan Mac & Craig Silverman, "Hurting People at Scale": Facebook's Employees Reckon with the Social Network They've Built, Buzzfeed (July 23, 2020, 12:59 PM): Yaël Eisenstat, Facebook's former election ads integrity lead "said the company’s policy team in Washington, DC, led by Joel Kaplan, sought to unduly influence decisions made by her team, and the company’s recent failure to take appropriate action on posts from President Trump shows employees are right to be upset and concerned."

5. Elizabeth Dwoskin, Craig Timberg, & Tony Romm, Zuckerberg once wanted to sanction Trump. Then Facebook wrote rules that accommodated him., Wash. Post (June 28, 2020, 6:25 PM): "But that started to change in 2015, as Trump’s candidacy picked up speed. In December of that year, he posted a video in which he said he wanted to ban all Muslims from entering the United States. The video went viral on Facebook and was an early indication of the tone of his candidacy....Ultimately, Zuckerberg was talked out of his desire to remove the post in part by Kaplan, according to the people. Instead, the executives created an allowance that newsworthy political discourse would be taken into account when making decisions about whether posts violated community guidelines....In spring of 2016, Zuckerberg was also talked out of his desire to write a post specifically condemning Trump for his calls to build a wall between the United States and Mexico, after advisers in Washington warned it could look like choosing sides, according to Dex Torricke-Barton, one of Zuckerberg’s former speechwriters."  

Regarding election interference: "Facebook’s security engineers in December 2016 presented findings from a broad internal investigation, known as Project P, to senior leadership on how false and misleading news reports spread so virally during the election. When Facebook’s security team highlighted dozens of pages that had peddled false news reports, senior leaders in Washington, including Kaplan, opposed shutting them down immediately, arguing that doing so would disproportionately impact conservatives, according to people familiar with the company’s thinking. Ultimately, the company shut down far fewer pages than were originally proposed while it began developing a policy to handle these issues."

6. Craig Timberg, How conservatives learned to wield power inside Facebook, Wash. Post (Feb. 20, 2020, 1:20 PM): "In a world of perfect neutrality, which Facebook espouses as its goal, the political tilt of the pages shouldn’t have mattered. But in a videoconference between Facebook’s Washington office and its Silicon Valley headquarters in December 2016, the company’s most senior Republican, Joel Kaplan, voiced concerns that would become familiar to those within the company. 'We can’t remove all of it because it will disproportionately affect conservatives,' said Kaplan, a former George W. Bush White House official and now the head of Facebook’s Washington office, according to people familiar with the meeting who spoke on the condition of anonymity to protect professional relationships."

Related articles about Facebook's role in the 2016 U.S. Election

7. Deepa Seetharaman, How a Facebook Employee Helped Trump Win--But Switched Sides for 2020, Wall St. J (Nov. 24, 2019, 3:18 PM): "One of the first things Mr. Barnes and his team advised campaign officials to do was to start running fundraising ads targeting Facebook users who liked or commented on Mr. Trump’s posts over the past month, using a product now called 'engagement custom audiences.' The product, which Mr. Barnes hand-coded, was available to a small group, including Republican and Democratic political clients. (The ad tool was rolled out widely around Election Day.) Within the first few days, every dollar that the Trump campaign spent on these ads yielded $2 to $3 in contributions, said Mr. Barnes, who added that the campaign raised millions of dollars in those first few days. Mr. Barnes frequently flew to Texas, sometimes staying for four days at a time and logging 12-hour days. By July, he says, he was solely focused on the Trump campaign. When on-site in the building that served as the Trump campaign’s digital headquarters in San Antonio, he sometimes sat a few feet from Mr. Parscale. The intense pace reflected Trump officials’ full embrace of Facebook’s platform, in the absence of a more traditional campaign structure including donor files and massive email databases."

Claiming "national emergency," Trump Issues Executive Order Banning US transactions with TikTok and WeChat

Late on Thursday, Aug. 6, 2020, Donald Trump issued two executive orders, one against TikTok and the other against Tencent's messaging platform WeChat.  Claiming a "national emergency," Trump invoked the authority of the "President by the Constitution and the laws of the United States of America, including the International Emergency Economic Powers Act (50 U.S.C. 1701 et seq.) (IEEPA), the National Emergencies Act (50 U.S.C. 1601 et seq.), and section 301 of title 3, United States Code." For a good summary of the International Emergency Economic Powers Act, read Anupam Chander's recent Washington Post op-ed and this NPR interview with Elizabeth Goitein.

The Executive Orders prohibit, "to the extent permitted under applicable law," any transactions with "ByteDance Ltd. (a.k.a. Zìjié Tiàodòng), Beijing, China, or its subsidiaries, in which any such company has any interest" (ByteDance owns TikTok) and with WeChat, effective in 45 days. The Secretary of Commerce is to identify the transactions prohibited by the order 45 days after its date. The Executive Order also prohibits "[a]ny transaction by a United States person or within the United States that evades or avoids, has the purpose of evading or avoiding, causes a violation of, or attempts to violate the prohibition." To justify this emergency action, Trump made the following charges against ByteDance and TikTok:

I, DONALD J. TRUMP, President of the United States of America, find that additional steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain).  Specifically, the spread in the United States of mobile applications developed and owned by companies in the People’s Republic of China (China) continues to threaten the national security, foreign policy, and economy of the United States.  At this time, action must be taken to address the threat posed by one mobile application in particular, TikTok.

TikTok, a video-sharing mobile application owned by the Chinese company ByteDance Ltd., has reportedly been downloaded over 175 million times in the United States and over one billion times globally.  TikTok automatically captures vast swaths of information from its users, including Internet and other network activity information such as location data and browsing and search histories.  This data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information — potentially allowing China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.

TikTok also reportedly censors content that the Chinese Communist Party deems politically sensitive, such as content concerning protests in Hong Kong and China’s treatment of Uyghurs and other Muslim minorities.  This mobile application may also be used for disinformation campaigns that benefit the Chinese Communist Party, such as when TikTok videos spread debunked conspiracy theories about the origins of the 2019 Novel Coronavirus.

These risks are real.  The Department of Homeland Security, Transportation Security Administration, and the United States Armed Forces have already banned the use of TikTok on Federal Government phones.  The Government of India recently banned the use of TikTok and other Chinese mobile applications throughout the country; in a statement, India’s Ministry of Electronics and Information Technology asserted that they were “stealing and surreptitiously transmitting users’ data in an unauthorized manner to servers which have locations outside India.”  American companies and organizations have begun banning TikTok on their devices.  The United States must take aggressive action against the owners of TikTok to protect our national security.

Trump made similar charges against WeChat:

WeChat, a messaging, social media, and electronic payment application owned by the Chinese company Tencent Holdings Ltd., reportedly has over one billion users worldwide, including users in the United States.  Like TikTok, WeChat automatically captures vast swaths of information from its users.  This data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information.  In addition, the application captures the personal and proprietary information of Chinese nationals visiting the United States, thereby allowing the Chinese Communist Party a mechanism for keeping tabs on Chinese citizens who may be enjoying the benefits of a free society for the first time in their lives.  For example, in March 2019, a researcher reportedly discovered a Chinese database containing billions of WeChat messages sent from users in not only China but also the United States, Taiwan, South Korea, and Australia.  WeChat, like TikTok, also reportedly censors content that the Chinese Communist Party deems politically sensitive and may also be used for disinformation campaigns that benefit the Chinese Communist Party.  These risks have led other countries, including Australia and India, to begin restricting or banning the use of WeChat.  The United States must take aggressive action against the owner of WeChat to protect our national security.

In a company blog post, TikTok said: "We will pursue all remedies available to us in order to ensure that the rule of law is not discarded and that our company and our users are treated fairly – if not by the Administration, then by the US courts." TikTok also called upon its 100 million users in the U.S. to make their voices heard in the White House: "We want the 100 million Americans who love our platform because it is your home for expression, entertainment, and connection to know: TikTok has never, and will never, waver in our commitment to you. We prioritize your safety, security, and the trust of our community – always. As TikTok users, creators, partners, and family, you have the right to express your opinions to your elected representatives, including the White House. You have the right to be heard."

In a report by CNN, a Tencent spokesperson said it is reviewing the Executive Order. There is some confusion about the scope of the Executive Order, which names any transactions with "Tencent Holdings" (not just WeChat). Tencent is a massive global conglomerate with many products and services (e.g., videogames by Riot Games, such as "League of Legends"), not just WeChat. A White House representative later confirmed to the LA Times that the order applies only to WeChat, not all of Tencent.

Meanwhile, according to the Wall St. Journal, bills have passed in the House and Senate that, if enacted, would ban federal employees from using TikTok on government devices. To become law, both chambers would have to agree on the same bill.

Facebook removes Donald Trump post claiming children are "almost immune" to COVID-19 for violating rules on COVID misinformation; Twitter temporarily suspends Trump campaign account for same COVID misinformation

On August 5, 2020, as reported by the Wall St. Journal, Facebook removed a post from Donald Trump that contained a video of an interview he did with Fox News in which he reportedly said that children are "almost immune from this disease." Trump also said COVID-19 “is going to go away,” and that “schools should open” because “this it will go away like things go away.” A Facebook spokesperson explained to the Verge: "This video includes false claims that a group of people is immune from COVID-19 which is a violation of our policies around harmful COVID misinformation." 

Twitter temporarily suspended the @TeamTrump campaign account from tweeting because of the same content. "The @TeamTrump Tweet you referenced is in violation of the Twitter Rules on COVID-19 misinformation,” Twitter spokesperson Aly Pavela said in a statement to TechCrunch. “The account owner will be required to remove the Tweet before they can Tweet again.” The Trump campaign resumed tweeting, so it appears the campaign complied and removed the tweet.

Neither Facebook nor Twitter provided much explanation of their decisions on their platforms, at least based on our search. They likely interpreted "almost immune from this disease" as misleading because children of every age can be infected by coronavirus and suffer adverse effects, including death (e.g., a 6-year-old, a 9-year-old, and an 11-year-old). In Florida, for example, 23,170 minors tested positive for coronavirus by July 2020. The CDC just published a study on the spread of coronavirus among children at a summer camp in Georgia and found extensive infection spread among the children: 

These findings demonstrate that SARS-CoV-2 spread efficiently in a youth-centric overnight setting, resulting in high attack rates among persons in all age groups, despite efforts by camp officials to implement most recommended strategies to prevent transmission. Asymptomatic infection was common and potentially contributed to undetected transmission, as has been previously reported (1–4). This investigation adds to the body of evidence demonstrating that children of all ages are susceptible to SARS-CoV-2 infection (1–3) and, contrary to early reports (5,6), might play an important role in transmission (7,8). 

Experts around the world are conducting studies to learn more about how COVID-19 affects children. The Smithsonian Magazine compiles a summary of some of these studies and is well worth reading. One of the studies, from the Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, did examine the hypothesis: "Decreased susceptibility could result from immune cross-protection from other coronaviruses, or from non-specific protection resulting from recent infection by other respiratory viruses, which children experience more frequently than adults." But the study noted: "Direct evidence for decreased susceptibility to SARS-CoV-2 in children has been mixed, but if true could result in lower transmission in the population overall." This inquiry was undertaken because, thus far, fewer children than adults have tested positive. According to the Mayo Clinic Staff: "Children of all ages can become ill with coronavirus disease 2019 (COVID-19). But most kids who are infected typically don't become as sick as adults and some might not show any symptoms at all." Moreover, a study from researchers in Berlin found that children "carried the same viral load, a signal of infectiousness." The Smithsonian Magazine article underscores that experts believe more data and studies are needed to understand how COVID-19 affects children.

Speaking of the Facebook removal, Courtney Parella, a spokesperson for the Trump campaign, said: "The President was stating a fact that children are less susceptible to the coronavirus. Another day, another display of Silicon Valley's flagrant bias against this President, where the rules are only enforced in one direction. Social media companies are not the arbiters of truth."

China enacts controversial "national security law" for Hong Kong, expanding its power against protesters


On June 30, 2020, China engaged in a secret process to enact a new national security law that would significantly impact the way Hong Kong uses the Internet, as reported by Forbes. Hong Kong’s chief executive, Carrie Lam, was not allowed to see a draft of the law before its passage. China has justified the law as a way to safeguard Hong Kong's economic development and political stability. The law prevents and punishes any act that would put China’s national security at risk, including secession, terrorism, subversion, and collusion with foreign forces. The vagueness of the four crimes means that they may be used broadly to silence any dissent or protesters in Hong Kong, including in content posted on social media, against China’s rule, according to NPR. Margaret Lewis, a law professor at Seton Hall Law School and a specialist on Hong Kong and Taiwan, told NPR, “What we do know is that Beijing now has an efficient, official tool for silencing critics who step foot in Hong Kong." 

The national security law is written very broadly. The law penalizes even offenses committed “outside the region by a person who is not a permanent resident of the region.” The text of the law thus purports to apply extraterritorially to any person in the world who writes something in violation of the law, including Americans. Someone who violates the new national security law could face a sentence of life in prison, according to Forbes. More concerning still, the new law authorizes China to set up a "National Security Committee" to oversee the investigation and prosecution of any violations without any judicial review. Michael C. Davis, a fellow at the Wilson Center, told NPR: “With this law being superior to all local law and the Basic Law (Hong Kong's constitution) itself, there is no avenue to challenge the vague definitions of the four crimes in the law as violating basic rights.” 

Logistically, cases classified as “complex” or “serious” will be tried in mainland Chinese courts by Chinese judges, according to NPR. This would push aside Hong Kong’s judicial system, waive trial by jury, and deny public access to the trial if the case contains sensitive information. One fear is that people arrested in Hong Kong would be extradited to China to face trial. 

Foreign tech firms fear that Beijing’s law will severely control or restrict the Internet that has remarkably shaped and fueled Hong Kong’s growth, as Forbes reports. Within hours of the law being passed, two opposition political parties in Hong Kong announced they were voluntarily disbanding. People in Hong Kong have been deleting their social media accounts out of fear that their speech could be considered “subversive” or “secessionist.” While Twitter has declined to comment on Hong Kong users dropping off the platform, there have been multiple public sign-offs from some top pro-democracy figures in Hong Kong. The BBC reported on July 30, 2020 that four students in Hong Kong were arrested for "inciting secession" on social media. 


Some people in Hong Kong may try to protect their identity on social media by using VPNs (virtual private networks), which conceal a user's online activity. Compared to June 2020, there was a 321% spike in VPN downloads in July. One Hong Kong protester told Fortune that he downloaded a VPN after the announcement of the new law because he was “really afraid that the [Chinese Communist Party] will get my personal information.” Police have already made a handful of arrests, including of one man who displayed a flag advocating for Hong Kong’s independence from China. Yet platforms like LIHKG (similar to the online messaging board Reddit) are still active with protesters expressing their anti-government criticisms.

--written by Alfa Alemayehu


Facebook and Instagram Studying Whether Their Algorithms Have Racial Bias After Years of Ignoring the Issue

Facebook announced it will create teams to study whether racial bias in Facebook's and Instagram's algorithms negatively affects the experience of minority users on the platforms. The Equity and Inclusion Team at Instagram and the Inclusivity Product Team at Facebook will tackle a large issue that Facebook has largely ignored in the past. Facebook is under intense scrutiny. Since July 2020, Facebook has faced a massive advertising boycott, called Stop Hate for Profit, by over five hundred companies such as Coca-Cola, Disney, and Unilever. Facebook has been criticized for a lack of initiative in handling hate speech and attempts to sow racial discord on its platforms, including attempts to suppress Black voters. An independent audit by civil rights experts found the prevalence of hate speech targeting Blacks, Jews, and Muslims on Facebook "especially acute." “The racial justice movement is a moment of real significance for our company,” Vishal Shah, Instagram’s product director, told the Wall Street Journal. “Any bias in our systems and policies runs counter to providing a platform for everyone to express themselves.” 

The new research teams will cover what has been a blind spot for Facebook. In 2019, Facebook employees found that an automated moderation algorithm on Instagram was 50 percent more likely to suspend the accounts of Black users than those of white users, according to the Wall Street Journal. This finding was supported by user complaints to the company. After employees reported these findings, they were sworn to secrecy and no further research on the algorithm was done by Facebook. Ultimately, the algorithm was changed, but it was not tested any further for racial bias. Facebook officially stated that the research was stopped because an improper methodology was being applied at the time. As reported by NBC News, Facebook employees leaked that the automated moderation algorithm detects and deletes hate speech against white users more effectively than it moderates hate speech against Black users.  

Facebook's plans for these teams to study racial bias on its platforms are only in the infancy stage. Instagram's Equity and Inclusion Team does not yet have an announced leader. The Inclusivity Product Team will supposedly work closely with a group of Black users and cultural experts to make effective changes. However, Facebook employees who previously worked on this issue have stated anonymously that they were ignored and discouraged from continuing their work. The culture of Facebook as a company and its previous inaction on racial issues have raised skepticism about these recent initiatives. Time will tell if Facebook is serious about the problem.  

--written by Sean Liu  


Cleaning house: Twitter suspends 7,000 accounts of QAnon conspiracy theory supporters

On July 21, 2020, Twitter suspended 7,000 accounts spreading QAnon conspiracy theories. In a tweet about the banning of these QAnon accounts, Twitter reiterated their commitment to taking "strong enforcement actions on behavior that has the potential to lead to offline harm." Twitter identified the QAnon accounts' violations of its community standards against "multi-account[s]," "coordinating abuse around individual victims," and "evad[ing] a previous suspension." In addition to the permanent suspensions, Twitter also felt it necessary to ban content and accounts "associated with Qanon" from the Trends and recommendations on Twitter, as well as to avoid "highlighting this activity in search and conversations." Further, Twitter will block "URLs associated with QAnon from being shared on Twitter." 

These actions by Twitter are a bold step in what has been a highly contentious area concerning the role of social media platforms in moderating hateful or harmful content. Some critics suggested that Twitter's QAnon decision lacked notice and transparency. Other critics contended that Twitter's actions were too little to stop the "omniconspiracy theory" that QAnon has become across multiple platforms.

So what exactly is QAnon? CNN describes the origins of QAnon, which began as a single conspiracy theory: its followers "claim dozens of politicians and A-list celebrities work in tandem with governments around the globe to engage in child sex abuse. Followers also believe there is a 'deep state' effort to annihilate President Donald Trump." Forbes similarly describes: "Followers of the far-right QAnon conspiracy believe a “deep state” of federal bureaucrats, Democratic politicians and Hollywood celebrities are plotting against President Trump and his supporters while also running an international sex-trafficking ring." In 2019, an internal FBI memo reportedly identified QAnon as a domestic terrorism threat.

Followers of QAnon are also active on Facebook, Reddit, and YouTube. The New York Times reported that Facebook was considering taking steps to limit the reach of QAnon content on its platform. Facebook is coordinating with Twitter and other platforms in considering its decision; an announcement is expected in the next month. Facebook has long been criticized for its response, or lack of response, to disinformation spread on its platform. Facebook is now the subject of a boycott, Stop Hate for Profit, calling for a halt to advertising until steps are taken to stop the spread of disinformation on the social media juggernaut. Facebook continues to allow political ads using these conspiracies on its site. Forbes reports that although Facebook has seemingly tried to take steps to remove pages containing conspiracy theories, a number of pages still remain. Since 2019, Facebook has allowed 144 ads promoting QAnon on its platform, according to Media Matters. Facebook has continuously provided a platform for extremist content; it even allowed white nationalist content until officially banning it in March 2019.

Twitter's crackdown on QAnon is a step in the right direction, but it signals how little companies like Twitter and Facebook have done in the past to stop disinformation and pernicious conspiracy theories. Because conspiracy theories can undermine effective public health campaigns to stop the spread of the coronavirus, and foreign interference can undermine elections, social media companies appear to be playing a game of catch-up. They would be well served by devoting even greater resources to the problem, with more staff and clearer articulation of their policies and enforcement procedures. In the era of holding platforms and individuals accountable for actions that spread hate, social media companies now appear to realize that they have greater responsibilities for what happens on their platforms.

--written by Bisola Oni

The Twitter Hack: What Preliminary Investigations Have Revealed

What happened: Hackers accessed a slew of Twitter accounts to sell their user names and took control of high-profile accounts to tweet links to a Bitcoin scam.

In a recent blog post, Twitter admitted that its platform was hacked on Wednesday, July 15, 2020. Twitter said the hackers engaged in a “social engineering scheme” to access its internal tools. Twitter defined “social engineering” as “the intentional manipulation of people into performing certain actions and giving out their personal information.”

Ultimately, hackers accessed 130 Twitter accounts. The hackers were able to reset the password for 45 accounts; they then logged into those accounts and tweeted out cryptocurrency "bitcoin" scams. The hacking scheme escalated just before 3:30 p.m. on July 15, 2020. According to a New York Times investigation, the accounts of certain cryptocurrency company elites began asking for Bitcoin donations to a website called “cryptoforhealth.” The Bitcoin wallet set up to receive the donations was none other than the wallet that a hacker known as “Kirk” had been using all day. “Kirk” then started tweeting out links from celebrities’ and tech giants’ accounts telling users to send money to a Bitcoin account, with the promise that the amount would be doubled in return.

According to an investigation by security journalist Brian Krebs, the Bitcoin account processed 383 transactions; according to the New York Times, 518 transactions were processed worldwide. It wasn’t until around 6 p.m. that Twitter put a stop to the scam messages. Twitter’s blog post stated: “We’re embarrassed, we’re disappointed, and more than anything, we’re sorry.” Once the hacks were detected, Twitter “secured and revoked access to internal systems,” restricted the functionality of many Twitter accounts – preventing tweeting and password changes – and locked accounts that had a recent password change.

What was accessed?

Twitter assured its users that, for all but the 130 hacked accounts, no personal information was compromised. However, it is likely the hackers saw those users’ personal information, such as phone numbers and email addresses. For the 45 accounts that were taken over, more information was compromised – but Twitter did not state what that information could be. For eight accounts, the hackers downloaded the users’ information, such as a summary of the user’s activity and account details. It is unclear at this time which eight accounts were affected.

Investigators are trying to identify the hackers – foreign state interference is not suspected.

Investigators are trying to figure out if a Twitter employee was involved or whether, as Twitter claimed, the hacking was orchestrated by social engineering, where one individual posed as a trusted employee to gain credentials and account access. The Federal Bureau of Investigation said, "the accounts appear to have been compromised in order to perpetuate cryptocurrency fraud.”  U.S. senators have demanded Twitter submit a brief by July 23, 2020. New York Governor Andrew Cuomo announced the state will conduct a full investigation.  

According to an exclusive New York Times interview with four of the culprits, the organized hacking scheme was not politically motivated, despite targeting some political and corporate elites. The New York Times verified the hackers’ identities – “lol,” “ever so anxious,” and two others – by matching their social media and cryptocurrency accounts. The hackers also provided photos of their chat logs. Krebs identified another key player in the Twitter hack, “PlugWalkJoe.” Investigators have confirmed some of the information relayed in the New York Times interview. “lol” is a 20-something living on the West Coast of the United States. “ever so anxious” is 19 and lives with his mother in the south of England. Both are well-known gamers on OGusers.com. “PlugWalkJoe,” whose name is Joseph O’Connor, is 21, British, and was in Spain when the Twitter hack started. Mr. O’Connor insists he played no part in Wednesday’s events. “Kirk,” by contrast, was unknown before Wednesday’s hack – and his real identity is still under investigation.

The scheme began with messages the previous Tuesday night between two hackers, “Kirk” and “lol.” “Kirk” reached out to “lol,” claiming he worked at Twitter, and demonstrated he could take control of valuable Twitter accounts. The hackers claim they were not part of a foreign interference plot – they are a group of young people, one still living with his mother, obsessed with owning early or unusual user names consisting of a single letter or number, such as @y or @6. But “lol” told the New York Times he suspected “Kirk” did not actually work at Twitter because he was “too willing to damage the company.”

Regardless, “Kirk” could take control of almost any Twitter account, including those of former President Obama, former Vice President and presumptive Democratic presidential nominee Joseph R. Biden, Elon Musk, and other celebrities. The BBC reported that other elites’ accounts were hacked too, including those of Bill Gates, Kim Kardashian, Kanye West, Apple, and Uber. Krebs adds Jeff Bezos, former New York Mayor Michael Bloomberg, and Warren Buffett to the list.

Prestige is King – Four hackers were inspired by an obsession with “OG user names.”

According to the hackers, “Kirk” directed the group’s efforts. However, two hackers, “lol” and “ever so anxious,” told the New York Times they sought the prestige of owning an original user name. The two claim they only helped “Kirk” by facilitating the purchases and takeovers of OG, or “original gangster,” user names earlier Wednesday. In their interview, the four hackers insisted they parted ways with “Kirk” before he started taking over higher-profile accounts. In the online gaming world, certain user names associated with the launch of a new online platform – so-called OG user names – are highly desired. These prestigious user names are snagged by the earliest users of the new platform. Many latecomers to the platform want the credibility of the OG user names, and will often pay big bucks to get one.

Wednesday’s hacking scheme began with a plan to commandeer and sell OG user names. “Kirk” asked “lol” and “ever so anxious” to act as middlemen for the sale of some Twitter OG user names, promising the two a cut of each transaction they secured. For example, the first “deal” “lol” brokered involved a person offering $1,500 in Bitcoin for the “@y” user name. The group posted an advertisement on OGusers.com and customers poured in. The group sold user names like @b, @dark, @l, @R9, @vague, @w, @6, and @50. One buyer, and possible culprit, “PlugWalkJoe,” bought the “@6” user name from “ever so anxious,” while “ever so anxious” commandeered the user name “@anxious” for himself. Nearly all the transactions related to the Twitter hack went into one Bitcoin wallet, predominantly used by “Kirk” throughout the day.

Election Day 2020 Concerns

Because high-profile politicians’ accounts were compromised in Wednesday’s Twitter hack, many have expressed concerns about potential disinformation campaigns closer to November 3rd. These concerns are exacerbated by the fact that Twitter did not detect the hacking scheme for hours after it started. While U.S. and state government officials have sought to protect voting systems against potential hacking, Wednesday’s chaos shows that efforts to protect the security of the upcoming presidential election might need renewed attention. The investigations into the Twitter hack are still ongoing, and many details remain unclear.

--written by Allison Hedrick

Schrems II: EU Court of Justice strikes down US-EU "Privacy Shield," which allowed businesses to transfer data despite lower privacy protections in US

On July 16, 2020, the European Union’s top court, the Court of Justice, struck down the trans-Atlantic data privacy transfer pact in a case called Schrems II. The agreement between the US and EU, known as the Privacy Shield, allowed businesses to transfer data between the United States and the European Union, even though U.S. privacy laws do not meet the higher level of data protection of EU law. Data transfer is essential for businesses that rely on the pact to operate across the Atlantic; for example, multinational corporations routinely obtain consumer data from the EU, such as shipping information, for further use in the US. The Court of Justice ruled that the transfer of data left European citizens exposed to US government surveillance and did not comply with EU data privacy law. The Court explained: "although not requiring a third country to ensure a level of protection identical to that guaranteed in the EU legal order, the term ‘adequate level of protection’ must, as confirmed by recital 104 of that regulation, be understood as requiring the third country in fact to ensure, by reason of its domestic law or its international commitments, a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union by virtue of the regulation, read in the light of the Charter."

Companies in the U.S. can work out privacy protections by contract, but such contracts also must comply with EU privacy standards. The Court explained: "the assessment of the level of protection afforded in the context of such a transfer must, in particular, take into consideration both the contractual clauses agreed between the controller or processor established in the European Union and the recipient of the transfer established in the third country concerned and, as regards any access by the public authorities of that third country to the personal data transferred, the relevant aspects of the legal system of that third country, in particular those set out, in a non-exhaustive manner, in Article 45(2) of that regulation."

Ars Technica explains the origins of the Privacy Shield and the troubles that have long existed with the agreement. Before the Privacy Shield was adopted, the agreement governing the sharing of consumer data across the Atlantic was called the Safe Harbor. In 2015, the Safe Harbor was invalidated after being challenged by Maximillian Schrems, an Austrian privacy advocate, because it conflicted with EU law. After the Safe Harbor was struck down by the Court of Justice, EU lawmakers and the US Department of Commerce negotiated the Privacy Shield, which went into effect in 2016. But many in the EU questioned its validity and lawfulness.

In Schrems II, the Court of Justice agreed. According to Axios, Schrems complained that the clause in Facebook's data contract was insufficient to protect Europeans from US government surveillance. The Court agreed, ruling that once the data entered the US, it was impossible to adequately ensure the protection of the data.  European citizens would have no redress in the US for violations of the EU standards of privacy. The Privacy Shield did not provide equivalent privacy protection. 

So what happens next? EU and US officials must negotiate a new data-sharing agreement that provides a level of privacy protection equivalent to the EU's. Tech companies like Google and Facebook have issued assurances that this decision will not affect their operations in Europe because they have alternative data-transfer contracts, according to Ars Technica. It remains to be seen whether a new transatlantic data-sharing agreement can be reached in a way that comports with EU privacy law.

--written by Bisola Oni

ProPublica, FirstDraft study: Nearly 50% of Top Performing Facebook Posts on Mail-In Ballots Were False or Misleading

ProPublica and FirstDraft conducted a study of "Facebook posts using voting-related keywords — including the terms 'vote by mail,' 'mail-in ballots,' 'voter fraud' and 'stolen elections' — since early April [2020]." According to ProPublica, Donald Trump and conservatives have misrepresented that mail-in voting leads to voter fraud. That assertion has not been substantiated. For example, the Washington Post found that states with all-mail elections — Colorado, Oregon, and Washington — had only 372 potential irregularities out of 14.6 million votes cast, or just 0.0025%. According to a recent study by Prof. Nicolas Berlinski and others, unsubstantiated claims of voter fraud can negatively affect public confidence in elections. The false claims can significantly undermine the faith of voters, Republican or Democratic, in the electoral process, even when the misinformation is disproved by fact-checks.
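As a quick sanity check on that percentage, here is an illustrative Python calculation (our own back-of-the-envelope arithmetic, not code from the Washington Post analysis):

    # 372 potential irregularities out of roughly 14.6 million votes cast
    irregularities = 372
    total_votes = 14_600_000
    print(f"{irregularities / total_votes:.4%}")   # prints 0.0025%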

In the study, ProPublica and FirstDraft found numerous posts on Facebook that contained misinformation about mail-in ballots. The study found that, of the top 50 posts ranked by total interactions that mentioned voting by mail from April 1, 2020 to July 2020, nearly half contained false or substantially misleading claims about voting, particularly about mail-in ballots. ProPublica identified the popular Facebook posts by using engagement data from CrowdTangle.

Facebook’s community standards state that no one shall post content that contains “[m]isrepresentation of the . . . methods for voting or voter registration or census participation.” Facebook CEO Mark Zuckerberg recently said on his Facebook page in June 2020 that he stands against “anything that incites violence or suppresses voting,” and that the company is “running the largest voting information campaign in American history . . . to connect people with authoritative information about the elections . . . crack down on voter suppression, and fight hate speech.” Facebook reportedly removed more than 100,000 posts from Facebook and Instagram that violated the company's community standard against voter suppression from March to May 2020. As ProPublica reported, California Secretary of State Alex Padilla stated that "Facebook has removed more than 90% of false posts referred to it by VoteSure, a 2018 initiative by the state of California to educate voters and flag misinformation."

However, according to the joint project by ProPublica and FirstDraft, Facebook is still falling well short in its efforts to stop election misinformation. Facebook has failed to take down posts from individual accounts and group pages that contain false claims about mail-in ballots and voter fraud, including some portraying "people of color as the face of voter fraud." 

Facebook is reported to be considering banning political ads in the days before the election, but that hardly touches the core problem — misinformation about mail-in ballots, which, according to the ProPublica and FirstDraft study, is far more widespread in ordinary posts than in ads.

--written by Yucheng “Quentin” Cui

Revisiting Reddit's Attempt to Stop "Secondary Infektion" Misinformation Campaign from Russia


Last year, Reddit announced that it banned 61 accounts in relation to a disinformation campaign dubbed “Secondary Infektion,” led by a Russian group. The campaign was exposed by Facebook earlier, in June 2019, for creating fake news in multiple languages involving multiple nations, aiming to “divide, discredit, and distract Western countries” through the dissemination of fake information such as assassination plans, attacks on Ukraine and its pro-Western government, and disputes between Germany and the US. This time, the operation created fake accounts and uploaded “leaked UK documents” on Reddit. A research firm, Graphika Labs, inspected the associated accounts and concluded they were linked to Secondary Infektion based on the same grammatical errors and language patterns.

Reddit’s investigation into suspicious accounts started with users’ reports of questionable posts. Reddit then worked with Graphika and soon found a “pattern of coordination” similar to that of the reported accounts linked to Secondary Infektion, which allowed it to “use these accounts to identify additional suspect accounts that were part of the campaign on Reddit.”

As Reddit’s statement put it, the company “encourage[s] users, moderators, and 3rd parties to report things to us as soon as they see them.” This statement reflects how much Reddit depends on its community to help moderate the site. Reddit is a heavily community-based platform: a collection of forums where users share content and comment on just about anything. To the left of every post are two buttons – the upvote and the downvote – which allow users themselves to rate content. The total score of a post is essentially the number of upvotes minus downvotes, and a post’s position on a page is based on its score rank. Basically, a higher score means more visibility.

The voting system is liked by many users because, unlike Facebook or Twitter, Reddit is more like a community curated by its users themselves. However, the voting system has its drawbacks: it can be gamed and manipulated. First, because everyone has a degree of moderating power, personal beliefs and agendas may get in the way. For example, a person may create several accounts just to downvote a post with which he does not agree, and as a result information may be buried by gaming the system. Second, there is a risk of content manipulation by coordinated attacks. As the June security report stated, Reddit has been heavily focused on content manipulation around the 2020 elections and on ensuring minorities’ voices are heard. Reddit has therefore invested in bot detection and defenses against malicious software. Admins have vast powers, including flagging fake accounts, and can try to ensure diversity of viewpoints and participation.
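To make the voting mechanics concrete, here is a minimal sketch in Python – an illustration of the net-score idea described above, not Reddit's actual ranking code – showing how a post's visibility follows upvotes minus downvotes, and how a coordinated batch of downvotes from sockpuppet accounts can push a post down the page:

    # Illustrative only: net-score ranking as described in the post above.
    def score(post):
        # A post's score is upvotes minus downvotes.
        return post["ups"] - post["downs"]

    def rank(posts):
        # Higher net score means more visibility (earlier position on the page).
        return sorted(posts, key=score, reverse=True)

    posts = [
        {"title": "Investigative report", "ups": 120, "downs": 10},  # score 110
        {"title": "Targeted post",        "ups": 125, "downs": 8},   # score 117
    ]
    print([p["title"] for p in rank(posts)])  # ['Targeted post', 'Investigative report']

    # Ten sockpuppet accounts each cast one extra downvote on the targeted post.
    posts[1]["downs"] += 10                   # score drops to 107
    print([p["title"] for p in rank(posts)])  # ['Investigative report', 'Targeted post']

Reddit's real ranking also factors in things like post age, but this basic net-score dynamic is what makes coordinated voting an attractive manipulation vector.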

Reddit could also consider changing some of its platform features. As some redditors have pointed out, Reddit's “gilding” feature, which is akin to a “super-upvote,” may enable manipulation. Users can gild posts with their Gold Reddit subscription or simply buy Reddit coins. Together with the voting system, gilding may make content manipulation easier: a malicious operation can buy coins to promote content as it wishes, even without creating fake accounts. Offering subscriptions is apparently Reddit's way to cover its costs and turn a profit, and subscriptions do offer other privileges, such as an ad-free experience. Nonetheless, if Reddit wants to stop content manipulation, the company may need to rethink the gilding power. 

--written by Candice Wang

