Image by Andreas Eldh

Since the advent of social media, the specter of online censorship has loomed.  Last week, Twitter dove headlong into the fray with a swift one-two punch that has some celebrating and others crying foul.  Nine months ago Twitter unveiled plans that would allow the company to block access to certain Twitter accounts on a country-specific level.  The aim of the plan was to limit access to tweets that might break the laws of individual countries.

At the time, the program sparked uproar.  Free speech advocates worried about the limitation of expression, while dissidents and activists living under oppressive regimes worried that this program would be used to silence their voices.  Despite the outcry, Twitter insisted that its commitment to free speech was resolute.

Twitter implemented its program for the first time last Thursday when it blocked access inside Germany to the account of the neo-Nazi group Besseres Hannover (“Better Hanover”).  The group is banned in Germany, where the use of Nazi symbols and slogans is a criminal offense.  In accordance with the policy, Twitter acted in response to requests from German officials, having declined to act on six previous such requests according to its biannual transparency report.  While users in Germany simply receive a notification that the account has been blocked, users outside the country can still view the group’s posts.  Germany’s past is the basis for its strict controls on speech like this, but the website (and therefore the account in question) is hosted here in the United States.  This raises the question: which country’s legal norms should govern the regulation of this type of content?

Not content to spark headlines for only one day, Twitter agreed the following day to remove certain anti-Semitic posts emanating from France.  The posts in question centered on the hashtag “#unbonjuif” (“a good Jew”) and included images from the Holocaust coupled with anti-Semitic jokes.  Rather than receiving requests from French officials, Twitter was alerted to these tweets by several Jewish groups within the country.  As in Germany, freedom of speech in France is not as robust as in the United States, and one of these Jewish groups threatened to seek an injunction in French courts to suppress the posts.

Twitter’s actions have invited comments from both sides.  Free speech advocates have denounced this as precisely the restraint on free expression they feared when the program was unveiled, and they took to using the controversial hashtag to denounce Twitter’s decision.  Those on the other side have touted this as the elimination of hate speech and denounced the posts as “a wave of feverish hatred.” To some, this might seem like an easy decision to make, considering the nature of the posts at issue.  Anti-Semitism and neo-Nazism would qualify as hate speech to even the most zealous free-speech advocates.  The question going forward should not be whether blocking these particular posts was the right decision.  It should be: now that Twitter has opened these floodgates, where exactly will it draw the line between acceptable regulation of hateful statements and censorship of the valid exercise of free speech?  Other social media outlets have struggled mightily with this question, and I do not believe Twitter will be able to adequately answer it any time soon.

Thomas McFarland


4 Responses to Hashtag Censorship?

  1. Amelia says:

    I think Ray makes a good point that Twitter need not allow a comment just because it conforms with the right to free speech in the US. However, from a normative perspective, what seems the most damaging about censorship here is that it is occurring on a country-by-country level. I like the idea that Twitter is a global platform, so it seems to me that each user should have access to the same tweets, regardless of what country they log in from. Simply removing German users from any conversation about Nazis could skew the direction of that conversation and would eliminate an important voice in the chorus.

  2. Ray Rufat says:

    Twitter should not be viewed as a public place; it should be viewed as a club. Membership in the club allows you to post tweets about whatever you want, but club membership has rules, and these rules include not saying things that are hateful or that incite violence…or whatever the Twitter club decides its policy should be. Nobody who joins Twitter should expect to have unlimited free speech rights. In fact, the last time I checked, I don’t believe you even own your tweets; Twitter does. It can choose what to do with them. As the poster above said, you can take your comments somewhere else. Some may see this censorship as bad for business, but in the big scheme of things I think people will accept a bit of censorship in order to protect the integrity of the forum. Nobody wants to see Twitter degenerate into a forum that breeds and endorses hatred.

  3. Jacob Schumer says:

    This highlights the difficulty of operating a globally accessible internet enterprise, like Facebook, Google, Twitter, etc. Google faced similar problems when trying to expand into China. We expect American companies to adhere to American values like free speech, but the reality is that large parts of Europe, Asia, and the Muslim world demand a certain amount of censorship, whether it’s in the name of suppressing hate groups or protecting the government. If Facebook, Google, and the rest want to go global, they will have to censor. I do not envy their position.

  4. John Lomascolo says:

    I don’t think Twitter should need to draw a line in this case. When it comes to regulating free speech, one of the questions the courts ask is whether the regulation leaves open adequate alternative channels for the expression. In this case, this type of online expression can be achieved through a number of other means. If people REALLY want to post hateful comments, they could try to do it on Facebook, on message boards, or in the comment sections of other websites, or they could just start their own blog or their own website. All these options grant people the same ability to share their views with the public. Also, this type of hateful language is exactly the kind of sentiment that heightens tension between nations, cultures, and people, so there is a legitimate interest in regulating it. If Twitter doesn’t want hateful comments on its website, I don’t think it should have to allow them, and if people don’t like it they can just take their comments somewhere else.