For the past two decades, police departments around the country have used facial-recognition tools to help solve crimes. These tools have traditionally limited law enforcement officers to searching only government-provided images. A small company called Clearview AI has recently released a new tool that gives users access to public images collected from popular social media websites. Law enforcement officials are now publicly reacting to police departments' use of the application to identify suspects.

On January 24th, the New Jersey State Attorney General, Gurbir S. Grewal, ordered all New Jersey police departments to stop using the Clearview AI application. In issuing the order, Mr. Grewal cited his concerns about data privacy, the security of law enforcement, and the integrity of law enforcement investigations. Mr. Grewal was unaware of Clearview AI and its application until the New York Times reported, the previous week, that Clearview AI had created a database containing more than three billion photos of people and was marketing the application to law enforcement agencies across the country.

The application “scrapes” photos from websites such as Facebook, Twitter, and YouTube. The app allows users to take and upload a picture of a person, and it returns “scraped” public photos of that person along with links to where each photo can be found. Individuals can prevent the application from scraping their photos by making their social media profiles private. However, once a public photo is scraped, Clearview AI keeps the image, even if the individual later deletes the photo or makes it private. Clearview AI has licensed its application to law enforcement agencies and security companies, and the company claims that more than six hundred law enforcement agencies across the country have used the application in the past year.

Clearview AI’s application adds to the growing list of concerns about facial-recognition technology, which has always been controversial. Critics note that facial-recognition technology produces false matches at higher rates for certain groups, such as people of color. Because the application’s database of photos will only continue to grow, the risk of misidentification will also increase. This risk is heightened with Clearview AI because independent experts have not reviewed the application. There are also data-security concerns, as law enforcement officers upload sensitive photos to the application without knowing if or how that data is protected.

In a memo, a Clearview AI legal advisor assured law enforcement agencies that they would not violate the Constitution or existing privacy laws by using the application for its intended purpose. The memo also advised law enforcement agencies that they need not disclose their use of the application to defendants, as long as the application’s positive identification was not the sole basis for obtaining an arrest warrant.

Mr. Grewal is not opposed to New Jersey police departments using Clearview AI in the future if the application makes it easier for law enforcement agencies to prevent and solve crimes. However, Mr. Grewal would first like to know more about how the application gathers and protects its data. In addition to issuing the order, Mr. Grewal has asked New Jersey’s Division of Criminal Justice to investigate how the state’s law enforcement agencies have used the app and whether Clearview AI is tracking information about the state’s police investigations.

Others are also wary of Clearview AI and its tactics. Many of the websites from which Clearview AI scrapes public images have policies that prohibit the scraping of users’ images. Twitter sent the company a cease-and-desist letter demanding that it delete any data scraped from the platform. Facebook and LinkedIn are also investigating whether the company’s scraping violates their respective policies.

There are broader concerns that unregulated facial-recognition technology, such as Clearview AI, can be abused and lead to warrantless searches. For example, the code underlying Clearview AI’s application includes functionality that would allow users to pair the application with augmented-reality glasses. There is a concern that users could use such an enhanced tool to identify activists at a protest or gather personal information about any stranger on the street. Officials at Clearview AI have stated that they do not intend to release this technology, but the possibility of such technology existing continues to fuel the debate surrounding the use of facial-recognition technology in our society. Regardless of the debate, this may not be the last time we hear about Clearview AI, as federal law enforcement agencies are testing out the technology.

––Mallory McCarthy
