California is proving to be a hotbed for mobile app privacy litigation. Its attorney general has developed mobile app privacy guidelines [PDF] to help app developers, app platform providers, advertising networks, and others understand the state’s privacy requirements. Last fall, it notified dozens of app developers that they had failed to comply with its privacy policy notice requirement, and it ended up filing suit against Delta Air Lines over its app. (While Delta eventually won dismissal on federal preemption grounds, companies outside the airline industry have no such recourse.) California even includes an explicit right to privacy in Article 1, Section 1 of its constitution. The state takes privacy very seriously, and as a center of technological innovation, its stance has real impact.

Apple, an integral part of the California technology landscape, has found itself involved in app privacy litigation before. And now, it has been made party to a suit (Pirozzi v. Apple, Inc., 12-cv-01529-JST (N.D. Cal. Aug. 3, 2013)) alleging that it allowed app developers to access personal information such as address books, photos, and videos when users granted permission to access only their current location. The plaintiff brought state claims of unfair competition, false advertising, consumer remedies violations, negligent misrepresentation, and unjust enrichment. Apple moved to dismiss all of the claims, but seemed to be firing blanks: it only managed to get the “claim” of unjust enrichment dismissed. (As the order explains, under California law unjust enrichment is “a basis for obtaining restitution based on quasi-contract or imposition of a constructive trust,” not an independent claim.)

Seemingly crucial to the court’s refusal to dismiss, Apple had allegedly made statements appearing to promise some measure of security and safety to users of iOS, its operating system for iPhones and iPads, including:

  • “iOS 4 is highly secure from the moment you turn on your iPhone.”
  • “All apps run in a safe environment, so a website or app can’t access data from other apps.”
  • “Apple takes precautions — including administrative, technical, and physical measures — to safeguard your personal information against loss, theft, and misuse, as well as against unauthorized access, disclosure, alteration, and destruction.”

At the dismissal stage, allegations like these are assumed to be true; if the claims move forward, they would still have to be proven at trial. But assuming these allegations are correct, what should we make of these statements, and of the apparently loose app behavioral controls? Are statements offering “highly secure” systems, “safe” environments, and “safeguards [for] personal information” promises? Advertising puffery? Does Apple’s “walled garden” approach make it more responsible (than, say, Google is for the more open Android) for what goes on in that garden? Has Google insulated itself from charges like this by developing more granular permissions for Android apps? Or is Google more at risk precisely because its Android platform is more open?

Does it (and should it) matter that security researchers at Georgia Tech recently identified a security flaw in Apple’s app store that allows apps to “turn” malicious after they make it through the app review process? (They even named their proof-of-concept app Jekyll.) In other words, when an app developer misbehaves, when if ever should a platform developer be held even partly responsible?

–Brad Edmondson

[H/T Eric Goldman]
