Data privacy infringement hurts in more ways than one. Typical concerns include knowing smartphones track our every move, especially during a pandemic.[1] But real harm occurs when corporations leverage artificial intelligence (AI) against us—when they use each person’s data to target vulnerabilities and exacerbate addictions. Since reinforcing addictions should not be profitable, we should disincentivize online manipulation.

Online manipulation is a “hidden influence,” which personally targets an individual’s characteristics and exploits any vulnerabilities.[2] With the help of AI, corporations behind the screen can exploit vulnerabilities at precisely the right moment with targeted advertisements or “hypernudges.”[3] This can be harmful. Reinforcing a “trigger, behavior, reward” pattern can increase a behavioral addiction (either to digital activity or purchased product).[4] This kind of manipulation is not surprising because it springs from developments in mass surveillance and AI analysis.

Mass Surveillance

Mass surveillance allows data collection—the fuel of AI.[5] Data collection comes in many forms. Data brokers, like Yodlee Inc., collect volumes of transactional data and sell it to profit-seeking third parties.[6] Facebook has compiled personal health data including “diet information, exercise activities, ovulation cycle, and intention to get pregnant.”[7] Corporations have access to very precise location data, which can tell quite a story.[8] Apple and Google’s COVID-19 tracing technology can track whether people came within six feet of one another.[9]

Critics downplay data collection’s harm, arguing that the anonymization of data minimizes privacy concerns.[10] But companies can reidentify data very easily. For instance, in the case of Yodlee, the data and patterns are extensive enough to reidentify someone with little additional information. And, because Yodlee assigns a unique code to a particular user, once a single transaction has been positively matched to someone, all the person’s transactions are unmasked.[11] Given the ease of identification, corporations can easily connect a consumer’s transactional, health, location, and internet data to construct a comprehensive profile.
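The reidentification process described in note 11 can be illustrated with a short sketch. The dataset, user codes, and known transactions below are hypothetical stand-ins, not Yodlee's actual schema; the point is only that two externally known (place, date) facts can narrow an "anonymized" dataset to a single user code, unmasking that code's entire history.

```python
# Hypothetical "anonymized" transaction dataset: no names, but each
# record still carries a persistent per-user code.
transactions = [
    {"user": "u1", "place": "bakery",     "date": "09-23"},
    {"user": "u1", "place": "restaurant", "date": "09-24"},
    {"user": "u1", "place": "shoe store", "date": "09-23"},
    {"user": "u2", "place": "bakery",     "date": "09-23"},
    {"user": "u2", "place": "gym",        "date": "09-24"},
]

def reidentify(transactions, known_points):
    """Return the user codes consistent with every known (place, date) pair."""
    candidates = None
    for place, date in known_points:
        matches = {t["user"] for t in transactions
                   if t["place"] == place and t["date"] == date}
        candidates = matches if candidates is None else candidates & matches
    return candidates

# Two facts known about "Scott" from outside the dataset:
known = [("bakery", "09-23"), ("restaurant", "09-24")]

suspects = reidentify(transactions, known)
if len(suspects) == 1:
    scott = suspects.pop()
    # Once the code is matched to Scott, ALL of its transactions are unmasked.
    history = [t for t in transactions if t["user"] == scott]
```

Here the two known points match only one code, so every other purchase tied to that code—the shoe-store trip included—is exposed, mirroring the de Montjoye study's finding that a handful of data points suffices.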

Leveraging Data Against Us

Deep learning AI can analyze hundreds of thousands of records and predict human behavior.[12] AI can predict human behavior based on social media friends’ activities alone.[13] The ability to predict behavior means the ability to predict who will be susceptible to a company’s ads or persuasion techniques. Targeting susceptible individuals effectively traps them in a cycle defined by their own or their acquaintances’ past actions.

AI can identify sensitive and personal vulnerabilities. Leaked strategy documents suggest that Facebook could identify teens precisely when they feel “defeated” or like a “failure.”[14] Data brokers infer interests and assign consumers a category: “Expectant Parent,” “Diabetes Interest,” and “Cholesterol Focus.”[15]

A profit-seeking corporation could use this information to help or hurt. A fitness organization might advertise ways to improve cholesterol. Another company, however, could use that same intel to exploit someone’s tendency toward a sedentary lifestyle. It could trigger and reinforce harmful addictions without facing legal disincentives.

Disincentivizing Manipulation

Reinforcing a behavioral addiction should not be profitable. Big tech companies should bear the costs of the externalities they create if they prey on our susceptibilities. Some argue that the individual bears responsibility to resist digital temptations.[16] But critics should consider the power disparity between the lone consumer and the corporations behind the screen. According to Silicon Valley ethicist Tristan Harris, “[t]here’s a thousand people on the other side of the screen whose job it is to break down whatever responsibility [you] can maintain.”[17] And those thousand people are trained in behavior change psychology,[18] informed by your data, and armed with AI. In other words, the odds are stacked against anyone trying to resist the attention-grabbing forces in their pocket.

One solution is twofold: (1) impose a fiduciary relationship between data holders and identified persons within the dataset, and (2) incentivize data anonymization. These restrictions offer companies a choice: either respect individual autonomy or limit your ability to target individuals.

Regulators could impose a fiduciary relationship between the data holder and the corresponding person.[19] A fiduciary duty already exists in relationships marked by a significant power disparity (e.g., lawyers with clients and doctors with patients), as is the case here. Data holders may know more than the person about his or her own vulnerabilities, and thousands of employees behind the screen are working to subvert the person’s decision-making. In the face of this power disparity, the fiduciary duties of care and loyalty would prevent the data holder from using data against a person’s best interests. While targeted advertising would still be permitted, a company could not target someone’s vulnerabilities. Any regulation should list protected vulnerabilities like obesity, alcoholism, and depression. Since fiduciary duties are burdensome, the duty would dissipate for anonymous datasets. Anonymous means the data is not associated with personally identifiable information, devices, or locations. But once data is reidentified, the fiduciary relationship resumes.

Online manipulation is a hidden evil, but we can end its most harmful effects. Implementing these reforms will shift the externalities onto companies that use mass surveillance and AI to manipulate vulnerable people.

–Erik Birnel is a 2020 graduate of Gonzaga Law. He recently accepted an associate attorney position at a firm specializing in employment and elder law (position pending bar exam results). Erik lives in Spokane Valley, WA with his wonderful wife (and hospital nurse) Rebecca.

[1] Ido Kilovaty & Mason Marks, A Right to Digital Self-Defense Will Prevent Abuse of COVID-19 Surveillance Apps, The Hill (Apr. 19, 2020, 11:00 AM).

[2] See Daniel Susser et al., Online Manipulation: Hidden Influences in a Digital World, 4 Geo. L. Tech. Rev. 1 (2019).

[3] See Daniel Susser et al., Online Manipulation: Hidden Influences in a Digital World, 4 Geo. L. Tech. Rev. 1 (2019).

[4] See Judson Brewer, The Craving Mind 18–19 (2017) (defining addiction as continued use despite adverse consequences and outlining the trigger, behavior, reward cycle).

[5] See Ryan Calo, Artificial Intelligence Policy: A Primer & Roadmap, 51 U.C. Davis L. Rev. 399, 405 (2017).

[6] See Joseph Cox, Leaked Document Shows How Big Companies Buy Credit Card Data on Millions of Americans, Motherboard Tech by Vice (Feb. 19, 2020, 7:47 AM).

[7] See Jianyan Fang, Health Data at Your Fingertips: Federal Regulatory Proposals for Consumer-Generated Mobile Health Data, 4 Geo. L. Tech. Rev. 125 (2019).

[8] See Stuart A. Thompson & Charlie Warzel, One Nation, Tracked: An Investigation Into the Smartphone Tracking Industry From Times Opinion, The New York Times (Dec. 19, 2019).

[9] Ido Kilovaty & Mason Marks, A Right to Digital Self-Defense Will Prevent Abuse of COVID-19 Surveillance Apps, The Hill (Apr. 19, 2020, 11:00 AM).

[10] See Joseph Cox, Leaked Document Shows How Big Companies Buy Credit Card Data on Millions of Americans, Motherboard Tech by Vice (Feb. 19, 2020, 7:47 AM).

[11] Yves-Alexandre de Montjoye et al., Unique in the Shopping Mall: On the Reidentifiability of Credit Card Metadata, Science, Jan. 30, 2015, at 536–39. As a group of scholars so succinctly describe the de-anonymization process: “[L]et’s say that we are searching for Scott in a simply anonymized credit card data set . . . We know two points about Scott: he went to the bakery on 23 September and to the restaurant on 24 September. Searching through the data set reveals that there is one and only one person in the entire data set who went to these two places on these two days . . . Scott is reidentified, and we now know all of his other transactions, such as the fact that he went shopping for shoes and groceries on 23 September, and how much he spent.”

[12] Will Knight, The Dark Secret at the Heart of AI, MIT Technology Review (Apr. 11, 2017).

[13] Luca Luceri et al., Analyzing and Inferring Human Real-Life Behavior through Online Social Networks with Social Influence Deep Learning, Applied Network Science, June 13, 2019.

[14] See Daniel Susser et al., Online Manipulation: Hidden Influences in a Digital World, 4 Geo. L. Tech. Rev. 1, 6 (2019) (noting that Facebook claimed the leaked strategy report was misleading, but the company did not deny their capability to do it, “leaving many to wonder if all that stands between us and this kind of purported manipulation is Facebook’s company policies.”).

[15] Fed. Trade Comm’n, Data Brokers: A Call for Transparency and Accountability 47 (May 2014).

[16] See David C. Vladek, Digital Marketing, Consumer Protection, and the First Amendment: A Brief Reply to Professor Ryan Calo, 82 Geo. Wash. L. Rev. Arguendo 156 (2014).

[17] Bianca Bosker, The Binge Breaker, The Atlantic (Nov. 2016).

[18] Bianca Bosker, The Binge Breaker, The Atlantic (Nov. 2016).

[19] See Jack M. Balkin, Lecture, Information Fiduciaries and the First Amendment, 49 U.C. Davis L. Rev. 1183 (2016).

