People v. Robots: A Roadmap for Enforcing California’s New Online Bot Disclosure Act

Barry Stricke · 22 Vand. J. Ent. & Tech. 839

Abstract

Bots are software applications that complete tasks automatically. A bot’s communication is disembodied, so a human can mistake it for a real person, a misbelief the bot’s owner can exploit to deploy malware or phish for personal data. Bots also pose as consumers posting online product reviews or spread (often fake) news, and a bot owner can coordinate multiple social-network accounts to trick a network’s “trending” algorithms, boosting the visibility of specific content, sowing and exacerbating controversy, or fabricating an impression of mass individual consensus. California’s 2019 Bolstering Online Transparency Act (the “CA Bot Act”) imposes conspicuous disclosure requirements on bots when they communicate or interact with humans in California. Call it Isaac Asimov’s fourth Law of Robotics: A robot may not pretend to be a human being. By requiring bots to “self-identify” as such, the CA Bot Act is a pioneer among laws regulating artificial intelligence. Most criticism of the act points to its lack of an enforcement mechanism to incentivize compliance. Accordingly, this Article lays out a map for sanctioning violations of the act through civil actions under California’s Unfair Competition Law and its statutory tort of fraudulent deceit. It outlines what is prohibited, who can be sued, and who has standing to sue, then addresses First Amendment limits on unmasking John Doe defendants via subpoena. For many reasons, attempts to hold CA Bot Act violators liable are most likely to prevail in the commercial arena, but a willful use of bots to undermine a political election or suppress voting might also be a worthy target. Ultimately, the law could be strengthened with an express enforcement provision. But if the CA Bot Act fires a first salvo against malicious online bots, this Article hopes to spark the powder.