On August 12, 2017, a white supremacist rally in Charlottesville, Virginia ended in tragedy when Nazi sympathizer James Alex Fields, Jr. drove his car into a crowd of counter-protesters, killing one woman, Heather Heyer, and injuring nineteen others. The “Unite the Right” rally was organized by members of the Ku Klux Klan, neo-Nazis, and other white nationalist groups to oppose the removal of a statue of General Robert E. Lee in Charlottesville’s Emancipation Park.

In the wake of the violence, many Silicon Valley companies began actively combatting hate speech after years of criticism for not doing enough to police racist, violent rhetoric. The Daily Stormer, named by the Southern Poverty Law Center as the “top hate site in America,” was evicted from its web host GoDaddy after publishing a blog post personally attacking Heyer. The site was subsequently dropped by its second host provider, Google, as well as by Russian and Albanian service providers. To date, the Daily Stormer exists only on the dark web.

Other tech companies were quick to follow suit. Facebook deleted all links to the Daily Stormer’s blog post shared across its site. Apple Pay and PayPal banned websites selling white supremacist merchandise, and GoFundMe shut down pages pledging financial support to Fields. Discord, a favorite chat app for gamers, banned the altright.com server and suspended accounts linked to the events in Charlottesville. Additionally, WordPress terminated multiple white supremacist blogs, and Spotify removed all “hate bands” from its streaming service. Almost every company that took action against white nationalists cited violations of its user agreement as the basis for terminating service, even though those agreements had previously seen little to no enforcement.

However, at least one CEO who banned white supremacy sites is conflicted about his decision. Matthew Prince, CEO of Cloudflare, has openly questioned whether companies like his that claim to be neutral platforms should have the power to exercise editorial judgment over Internet content. After Daily Stormer supporters claimed that Cloudflare secretly supported their racist ideology, Prince made the decision to cut ties. While Cloudflare has consistently asserted its neutrality and willingness to protect any site, regardless of content or political position, its termination of the Daily Stormer may indicate that the tech industry is beginning to assert its moral authority to moderate content online.

Internet providers like Google, Facebook, and Cloudflare are protected by § 230 of the Communications Decency Act of 1996. Section 230 gives sites that publish third-party content immunity from laws that would normally hold them responsible for what others post. This legal framework allows YouTube users to upload their own videos, Amazon customers to publish product reviews, and Facebook users to post on pages across the site without fear of lawsuits. However, this same law has allowed providers hosting white supremacist webpages, as well as sites like Backpage.com, to wash their hands of criminal activity proliferating on their pages.

While tech companies have historically refused to police hate speech online, these recent actions may indicate a shifting liability landscape. From the beginning of the internet, these companies have viewed themselves as neutral arbiters of the virtual discourse. With this latest purge of white supremacists, there is growing concern that internet providers in particular should not be so quick to ban unpopular voices from their platforms. Lee Rowland, senior staff attorney with the American Civil Liberties Union’s Speech, Privacy & Technology Project, framed the argument this way: “We rely on the Internet to hear each other … We should all be very thoughtful before we demand that platforms for hateful speech disappear because it does impoverish our conversation and harm our ability to point to evidence for white supremacy and to counter it.” Sites like Facebook and YouTube possess artificial intelligence technology that can easily pinpoint and shut down suspect pages or groups. While this technology has primarily been used to combat terrorism, it could easily be expanded to root out fringe groups or unpopular organizations.

Others have expressed concern that with the increased regulation of content, internet providers risk eroding the immunity they enjoy under Section 230. Susan Benesch, director of the Dangerous Speech Project, told the Washington Post of tech companies, “The more they get into the business of policing speech — making subjective decisions about what is offensive and what isn’t — the more they are susceptible to undermining their own immunity and opening themselves to regulation.” Section 230 is the cornerstone of the tech industry and the internet as we know it today, and it’s difficult to imagine an internet without it.

While shutting down terrorism and violent white supremacy online may be easy decisions, it’s important to consider who decides what sites remain. Companies like Facebook and Google are public corporations that answer to shareholders and are accountable for their bottom line. The sheer vastness of these platforms almost certainly makes consistent enforcement of their user agreements untenable: Facebook boasts one third of the world’s population as monthly users, while GoDaddy hosts and registers over 71 million websites. When deciding which hate speech to censor, tech giants risk alienating customers across the ideological spectrum and the world. Going forward, Silicon Valley must be careful to work within the protections of the Communications Decency Act while balancing neutrality concerns with the need to protect the public against violent hate groups and extremism.

Madison C. Crooks
