Google is one of the biggest names in the technology industry. It directly holds a great deal of valuable data, it accounts for a significant percentage of all Internet traffic (over 6% in 2010), and it dominates the other search providers in the race to channel searches to web pages (one measure ascribes to Google more than 90% of all search referrals in 2011). For all three reasons, it has long been a high-profile target of cybercriminals.
Fortunately, Google has also developed one of the most well-respected information security teams in the world, employing security engineers, code reviewers, security researchers, and penetration testers, who push and prod on the security controls implemented throughout the company to ensure they are working properly. In information security parlance, a team of penetration testers that has been given a broad right (and responsibility) to break into the systems maintained by the rest of the company is called a red team. Because red teams are designed to bring organizational weaknesses, or areas showing opportunity for improvement, to the surface, they can be quite controversial. Even so, red-team exercises have become generally accepted as an essential element in commonly used information security control frameworks, including those released by the security training organization SANS, the National Institute of Standards and Technology, and the Department of Defense.
While Google, like other large software and service providers, has likely had red-team information security experts deployed for many years, the company is applying the red-team concept to a new area: privacy. Google has posted an opening for a new position called “Data Privacy Engineer, Privacy Red Team.” The successful applicant will join the “Privacy Red Team” to “independently identify, research, and help resolve potential privacy risks across all of our products, services, and business processes.” If Google really intends to transpose the red-team concept from data security to data privacy, part of the engineer’s job will likely be the unannounced inspection of products across Google’s portfolio.
Will privacy red teams catch on as a new practice among cloud service providers? Should they? The industry may already be changing on its own: although it appears to lag somewhat behind the rest of the industry in privacy governance, Amazon recently hired its first privacy counsel. Does Google’s move represent the next step in commercially reasonable privacy and security practices for cloud service providers, or is it a one-off? Should those companies be held to account if they do not hire privacy testers and a breach in privacy (as opposed to security) harms the owner of the data? For that matter, should they be held to account if they do not hire security red teams and suffer a security breach?