The Free Internet Project


What is Parler? An "unbiased social media"? Or a platform for conservative Republicans?

Parler (French for "to talk") is a social media platform started in 2018. Its mission is to be "an unbiased social media focused on real user experiences and engagement." It is touted as an alternative to Twitter that allows users to post content and comment as on Twitter, but without political bias. Many Republican politicians who believe Twitter is biased against conservatives have migrated to Parler and are promoting it as a platform. After Twitter and Snapchat recently moderated some of Donald Trump's posts that violated their community standards, more conservative Republicans switched to Parler. Ted Cruz joined Parler, as did three other Republican politicians, Jim Jordan, Elise Stefanik, and Nikki Haley, as CNBC reported. Parler may become Republican lawmakers' and Trump's favorite social media site. Trump's campaign manager Brad Parscale accused Twitter and Facebook of biased censorship and stated that the campaign team may select an alternative platform, such as Parler, as reported by the Wall Street Journal. Parler ranked as the top news app in Apple's App Store and had 1.5 million users in 2020. By comparison, Twitter has over 145 million active users.

Content moderation by Internet platforms has become a hot-button issue. In the past, platforms took permissive approaches in the name of free speech, but they soon realized the need to moderate some objectionable content posted by their users. Most people would agree that, despite the importance of free expression and the free flow of information, allowing everyone to post anything online can lead to false, illegal, and harmful content being shared. So Internet companies must exercise some moderation of user content, but the unsolved puzzle is what the standards should be and who should decide them.

Touted by Republicans, Parler attracted many new users in the past few days. However, some users realized that the newly hyped platform was not free of content moderation. Besides restricting the commonly prohibited content outlined in Parler's Community Guidelines, such as spam, fighting words, pornography, and criminal solicitation, Parler also makes clear in its User Agreement: "Parler may remove any content and stop your access to the Services at any time and for any reason or no reason, although Parler endeavors to allow all free speech that is lawful and does not infringe the legal rights of others … Although the Parler Guidelines provide guidance to you regarding content that is not proper, Parler is free to remove content and terminate your access to the Services even where the Guidelines have been followed."

Some liberal users were reportedly banned from Parler. Techdirt compiled a list of some of the users who were banned. Parler's banning of liberal users does not appear to be consistent with its motto as an "unbiased social media." Even some conservative commentators criticized Parler for not abiding by its privacy policy when it asked users for a driver's license. The goal of a politically unbiased Internet platform may be a worthy one. But it remains to be seen whether Parler provides such a space.

--written by Candice Wang


Over 130 Companies Remove Ads from Facebook in #StopHateforProfit Boycott, Forcing Mark Zuckerberg to Change Lax Facebook Policy on Misinformation and Hate Content

In the aftermath of the Cambridge Analytica scandal, in which the company exploited Facebook to target and manipulate swing voters in the 2016 U.S. election, Facebook conducted an internal review to examine the company's role in spreading misinformation and fake news that may have affected the election, as CEO Mark Zuckerberg announced. In 2018, Zuckerberg announced that Facebook was making changes to be better prepared to stop misinformation in the 2020 election. Critics, however, dismissed the changes as modest. As WSJ reporters Jeff Horwitz and Deepa Seetharaman detailed, Facebook executives largely rejected the internal study's recommendations to reduce polarization on Facebook, reasoning that doing so might be "paternalistic" and might open Facebook up to criticisms of being biased against conservatives.

Despite the concerns about fake news and misinformation affecting the 2020 election, Facebook took the position that fact-checking for misinformation did not apply to posts and ads by politicians in the same way it applied to everyone else. Facebook's policy was even more permissive toward political ads and politicians. As shown below, Facebook justified this hands-off position as advancing political speech: "Our approach is grounded in Facebook's fundamental belief in free expression, respect for the democratic process, and the belief that, especially in mature democracies with a free press, political speech is the most scrutinized speech there is. Just as critically, by limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words."

Facebook's Fact-Checking Exception for Politicians and Political Ads

By contrast, Twitter CEO Jack Dorsey decided to ban political ads in 2019 and to monitor the content of politicians just as Twitter does with all other users for misinformation and other violations of Twitter's policy. Yet Zuckerberg persisted in his "hands off" approach: "I just believe strongly that Facebook shouldn't be the arbiter of truth of everything that people say online." Zuckerberg even said Twitter was wrong to add warnings to two of President Trump's tweets, one flagged as misleading (regarding mail-in ballots) and the other as glorifying violence (Trump said, "When the looting starts, the shooting starts," regarding the protests of Minneapolis police officer Derek Chauvin's killing of George Floyd). Back in October 2019, Zuckerberg defended his approach in the face of withering questioning by Rep. Alexandria Ocasio-Cortez.


In May and June 2020, Zuckerberg maintained his "hands off" approach. Some Facebook employees quit in protest, while others staged a walkout. Yet Zuckerberg still did not budge.

On June 17, 2020, Color of Change, "the nation's largest online racial justice organization," together with the NAACP, the Anti-Defamation League, Sleeping Giants, Free Press, and Common Sense Media, organized a boycott of advertising on Facebook for the month of July. The boycott was labeled #StopHateforProfit. Within just 10 days, over 130 companies joined the ad boycott of Facebook, including many large companies such as Ben & Jerry's, Coca-Cola, Dockers, Eddie Bauer, Levi's, The North Face, REI, Unilever, and Verizon.

On June 26, 2020, Zuckerberg finally announced some changes to Facebook's policy.  The biggest changes:

(1) Moderating hateful content in ads. As Zuckerberg explained on his Facebook page, "We already restrict certain types of content in ads that we allow in regular posts, but we want to do more to prohibit the kind of divisive and inflammatory language that has been used to sow discord. So today we're prohibiting a wider category of hateful content in ads. Specifically, we're expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others. We're also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them."

(2) Adding labels to posts, including from candidates, that may violate Facebook's policy. As Zuckerberg explained, "Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

"We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society -- but we'll add a prompt to tell people that the content they're sharing may violate our policies.

"To clarify one point: there is no newsworthiness exemption to content that incites violence or suppresses voting. Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down. Similarly, there are no exceptions for politicians in any of the policies I'm announcing here today." 

Facebook's new labeling of candidates' content sounds very similar to the Twitter practice that Zuckerberg criticized as wrong. And Facebook's new policy of moderating hateful content in ads claiming that "people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status" are "a threat to the physical safety, health or survival of others" seems a positive step to prevent Facebook from being a platform to sow racial discord, which, according to U.S. intelligence, is a goal of Russian operatives.

Facebook's new policy on moderation of political ads and posts by politicians and others

The organizers of the boycott, however, were not impressed with Facebook's changes. They issued a statement quoted by NPR: "None of this will be vetted or verified — or make a dent in the problem on the largest social media platform on the planet. We have been down this road before with Facebook. They have made apologies in the past. They have taken meager steps after each catastrophe where their platform played a part. But this has to end now."


Trump Campaign Snaps at Being Removed from Snapchat's Discover Page

On June 3, 2020, Snapchat decided to stop promoting the Snapchat account of Donald Trump on its Discover page, which provides a feed of stories from celebrities and other popular profiles that are curated by Snapchat for its users.

Section 230 protections for Internet platforms come under attack in U.S.

Section 230 of the Communications Decency Act [text] was enacted in 1996. Many commentators have hailed Section 230 as giving birth to the explosion of expression, businesses, social media, applications, and user-generated content on the Internet.  The reason is that Section 230 shielded Internet platforms from potentially business-ending liability, while facilitating the development of new applications enabling individuals to publish their own content online.  As Wired's Matt Reynolds puts it, "It is hard to overstate how foundational Section 230 has been for enabling all kinds of online innovations. It’s why Amazon can exist, even when third-party sellers flog Nazi memorabilia and dangerous medical misinformation. It’s why YouTube can exist, even when paedophiles flood the comment sections of videos. And it’s why Facebook can exist even when a terrorist uses the platform to stream the massacre of innocent people. It allows for the removal of all of these bad things, without forcing the platforms to be legally responsible for them." 

More recently, however, Section 230 has become a lightning rod, criticized by the Trump administration and others who disagree with shielding Internet platforms from liability for the potentially unlawful or harmful content posted by their users. The Trump administration and conservative Republicans contend that Twitter and Google, for example, engage in biased content moderation that disfavors conservatives in favor of more liberal positions or politicians. Others criticize Section 230 as being too permissive in letting social media companies off the hook, even though so much disturbing, if not dangerous, content is shared on their platforms. This article explains Section 230 and then the recent criticisms that the Trump administration and others have raised....
