Imagine that during your morning scroll through the news, you come across a video of President Obama addressing the American people. In this seemingly innocuous video, President Obama calls President Trump “a total and complete dipsh*t” and tells the audience to “stay woke b*tches.” Sound absurd? As it turns out, this satirical video actually exists.

Welcome to the age of “deepfakes.”

“Deepfake,” a blend of “deep learning” and “fake,” refers to artificial images and videos created using machine learning algorithms. The technology manipulates existing digital content, spawning images or videos of people saying or doing things they have never actually done. The deepfake parody of President Obama was created by Buzzfeed and Jordan Peele to serve as a PSA about the technology’s potential dystopian effects. The video’s message is clear: don’t believe everything you see and read on the internet.

But in the lead-up to the 2020 election, can we trust the American people to discern real videos from deepfake propaganda? Congress doesn’t think so. In June 2019, Representative Yvette Clarke (D-NY) introduced the DEEPFAKES Accountability Act, short for the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act (a true feat of acronym ingenuity). The Act would require anyone creating synthetic media that imitates a person to disclose that the content is altered, using “irremovable digital watermarks” and “textual descriptions.” It would also give victims of synthetic media an avenue to sue the creator of the deepfake content.

However, the DEEPFAKES Accountability Act creates more problems than it solves. While the Act targets malicious deepfakes, it could also sweep up parody images and videos like Buzzfeed’s PSA, raising significant First Amendment concerns. That result would chill political satirists and may run afoul of traditional free speech jurisprudence. And yet the Act is unlikely to curb the political threat posed by deepfakes. It gives victims legal recourse but does little to deter bad actors, who can easily hide their identities by stripping a deepfake’s documentation and metadata. More importantly, once a deepfake goes viral, the political damage has likely already been done.

What, then, is the most effective legal mechanism to protect the electorate from deepfake propaganda? The answer lies not with content creators but with content hosts. Congress should require content-hosting sites, such as YouTube and Facebook, to bear the burden of policing deepfakes. Under Section 230(c) of the Communications Decency Act, these sites are currently insulated from liability for content posted by third parties, and they are likewise shielded from liability for their own good-faith moderation and takedown decisions. While this protection gives content hosts the freedom to experiment with their own moderation solutions, it does not require them to establish any.

Holding content-hosting sites liable for moderating deepfakes is not a perfect solution. But shifting liability from content creators to content hosts would be more likely to achieve the goals of the DEEPFAKES Accountability Act. The threat deepfake propaganda poses to the upcoming 2020 election lies primarily in dissemination, not creation. If content hosts, rather than creators, bear the burden of moderation, a deepfake can be stopped in its tracks before it goes viral.

Congress currently lacks both the tools and the understanding to regulate deepfakes effectively. To preserve the integrity of the electoral process, it can, and should, shift that responsibility to content hosts.

Carrie Cobb
