The polls were overwhelmed on Tuesday, November 6 with an influx of voters, young and old, ready to make a difference and influence their local communities by voting. But as the date of the 2018 midterm elections drew nearer, social media companies like Facebook, Twitter, and Snapchat were overwhelmed with another issue—how their platforms were influencing voters before election day.

The influence social media exerts over the political arena became apparent after the 2016 presidential election. Encouraging peers to vote by sharing voting information on social networks can increase voter turnout, help candidates raise campaign funds, and raise awareness about important social and political issues. However, there is a dark side to the way social media can influence politics, one that many of us have experienced firsthand. Fake news, misinformation, and targeted manipulation were huge issues that many social media sites were unprepared for in 2016. Many large social media platforms took action after the 2016 election, increasing their security measures, requiring more transparency for political advertisements, and improving their ability to detect fake user accounts. Despite the increased awareness, though, many of the same problems were present online in the months leading up to the 2018 midterms. Fake news circulated about candidates on multiple social media sites, and misinformation spread about the different parties' platforms in spite of companies' best efforts to review and assess potentially inauthentic material.

Particularly concerning in the lead-up to the 2018 midterm elections was the influx of false advertisements, posts, and tweets encouraging voter suppression in one form or another. "I hear ICE agents will be at polling stations on election day," stated one tweet that was taken down, an obvious attempt to scare immigrants out of voting. Other posts meant to confuse voters about how, when, and where to vote plagued social media companies in the months leading up to the midterms. In North Dakota, an ad discouraging hunters from voting was condemned by the Republican party; at the same time, another ad targeted at young male Democrats suggested that they stay home on election day to give their female counterparts' votes more weight. While Twitter, Facebook, and many other social media sites have been more active than ever in trying to tackle misinformation by shutting down fake accounts and taking down false advertisements, the hackers and radical voter groups responsible for these types of messages keep getting smarter.

This abuse of social media platforms has led many people to question whether a legal approach would help stop the spread of political misinformation. With the companies themselves already equipped with technology and personnel to fight election manipulation, would legislation restricting the content of these sites or increasing their liability be worth the public backlash that would almost certainly follow? Companies such as Facebook have already imposed content bans prohibiting posts that deter citizens from voting or spread misinformation, while Twitter has worked tirelessly to deactivate fake accounts in an attempt to thwart hackers from scaring voters away from the polls. Because Twitter and Facebook are private organizations, it is not a First Amendment violation for them to police online users' speech; if the government involved itself by imposing legislative bans on content, however, it would be a very different story.

One protection that online users have when it comes to political discourse comes in the form of Section 230 of the Communications Decency Act of 1996, a piece of legislation that shields social media companies from liability for the content posted by users on their sites. It treats online intermediaries as exempt from a host of laws that could otherwise hold them responsible for what their users say and do. This protection is essential for sites that try to encourage potentially controversial discussions, such as Facebook and YouTube. In fact, without it, sites would be highly incentivized to censor or ban user content for fear of liability.

Monitoring social media for misinformation while balancing users’ freedom of speech and expression is no easy task. The question of how best to contain the spread of fake news and misinformation is one that we will surely keep asking ourselves as we prepare for the long road that lies ahead leading up to the 2020 election.

Sarah Rodrigue
