Apple, an integral part of the California technology landscape, has found itself involved in app privacy litigation before. And now, it has been made party to a suit (Pirozzi v. Apple, Inc., 12-cv-01529-JST (N.D. Cal. Aug. 3, 2013)) alleging that it allowed app developers to access personal information such as address books, photos, and videos when users granted permission to access only their current location. The plaintiff brought state claims of unfair competition, false advertising, consumer remedies violations, negligent misrepresentation, and unjust enrichment. Apple moved to dismiss on all claims, but seemed to be firing blanks: it only managed to get the “claim” of unjust enrichment dismissed. (As the order explains, under California law unjust enrichment is “a basis for obtaining restitution based on quasi-contract or imposition of a constructive trust,” not an independent claim.)
Seemingly crucial to the court’s refusal to dismiss, Apple had allegedly made claims appearing to promise some measure of security and safety to users of iOS, its operating system for iPhones and iPads, including:
- “iOS 4 is highly secure from the moment you turn on your iPhone.”
- “All apps run in a safe environment, so a website or app can’t access data from other apps.”
- “Apple takes precautions — including administrative, technical, and physical measures — to safeguard your personal information against loss, theft, and misuse, as well as against unauthorized access, disclosure, alteration, and destruction.”
At the dismissal stage, facts like these are assumed to be true, but once the claim moves forward, all of this would still need to be proven at trial. But assuming these allegations are correct, what should we make of these statements, and of the apparently loose app behavioral controls? Are statements offering “highly secure” systems, “safe” environments, and “safeguards [for] personal information” promises? Advertising puffery? Does Apple’s “walled garden” approach make it more responsible (than, say, Google is for the more open Android) for what goes on in that garden? Has Google insulated itself from charges like this by developing more granular permissions for Android apps? Or is Google more at risk precisely because its Android platform is more open?
Does it (and should it) matter that security researchers at Georgia Tech recently identified a security flaw in Apple’s app store that allows apps to “turn” malicious after they make it through the app review process? (They even named the proof-of-concept app Jekyll.) In other words, when an app developer misbehaves, when, if ever, should a platform developer be held even partly responsible?
[H/T Eric Goldman]