Apple, an integral part of the California technology landscape, is no stranger to app privacy litigation. Now it has been made party to a suit (Pirozzi v. Apple, Inc., 12-cv-01529-JST (N.D. Cal. Aug. 3, 2013)) alleging that it allowed app developers to access personal information such as address books, photos, and videos when users had granted permission to access only their current location. The plaintiff brought state-law claims of unfair competition, false advertising, consumer-remedies violations, negligent misrepresentation, and unjust enrichment. Apple moved to dismiss all claims but seemed to be firing blanks: it managed to get only the “claim” of unjust enrichment dismissed. (As the order explains, under California law unjust enrichment is “a basis for obtaining restitution based on quasi-contract or imposition of a constructive trust,” not an independent claim.)
Seemingly crucial to the court’s refusal to dismiss, Apple had allegedly made statements appearing to promise some measure of security and safety to users of iOS, its operating system for iPhones and iPads, including:
- “iOS 4 is highly secure from the moment you turn on your iPhone.”
- “All apps run in a safe environment, so a website or app can’t access data from other apps.”
- “Apple takes precautions — including administrative, technical, and physical measures — to safeguard your personal information against loss, theft, and misuse, as well as against unauthorized access, disclosure, alteration, and destruction.”
At the dismissal stage, allegations like these are assumed to be true; as the case moves forward, all of this would still need to be proven at trial. But assuming the allegations are correct, what should we make of these statements, and of the apparently loose controls on app behavior? Are statements offering “highly secure” systems, “safe” environments, and “safeguards [for] personal information” promises? Advertising puffery? Does Apple’s “walled garden” approach make it more responsible (than, say, Google is for the more open Android) for what goes on in that garden? Has Google insulated itself from charges like this by developing more granular permissions for Android apps? Or is Google more at risk precisely because its Android platform is more open?
Does it (and should it) matter that security researchers at Georgia Tech recently identified a flaw in Apple’s app store that allows apps to “turn” malicious after they make it through the app review process? (They even named the proof-of-concept app Jekyll.) In other words, when an app developer misbehaves, when, if ever, should the platform developer be held even partly responsible?
[H/T Eric Goldman]