In a world of ever-increasing technological innovation, we often find ourselves in a bind where security and privacy interests conflict. Indeed, we are regularly forced into situations where our basic human instinct for protection and security competes with our (perhaps uniquely American) expectations of privacy. The airport is one place where this dilemma materializes on a daily basis, particularly given U.S. Customs and Border Protection's (CBP) increasing reliance on facial recognition technology in security lines.

In the aftermath of 9/11, a federal commission recommended the use of biometric screening systems at international ports in the United States. The proposal was thrust into the limelight in 2017, when President Trump signed an executive order requiring facial recognition identification for all passengers–including American citizens–traveling on international flights by 2021. Currently, the Transportation Security Administration (TSA) and several airlines are partnering with the CBP to sponsor pilot programs that test the efficiency of facial recognition in lieu of traditional forms of identification and boarding passes. Delta, United, American, and JetBlue are among the airlines that have already implemented facial recognition technology or are sponsoring pilot programs in at least nineteen airports across the country.

The process of facial recognition in airports is relatively simple. First, a passenger has his or her photo taken at a checkpoint–a process that involves staring into a box for a few seconds. The camera is linked to the CBP's Traveler Verification Service, which matches the new photo against archived photos stored in the Department of Homeland Security's database. These archived images include traditional forms of identification, such as passport and license photos, as well as mugshots. The system then compares facial features, such as the distance between a person's eyes and the distance from the forehead to the chin. Once the photos are compared, the system reports whether the passenger is clear to board the aircraft.
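To make the geometric comparison a bit more concrete, here is a minimal Python sketch of the general idea. Everything in it is hypothetical: the landmark names, the two measurements, and the matching tolerance are assumptions made for illustration, and the CBP's actual Traveler Verification Service is proprietary and almost certainly relies on far more sophisticated matching than simple distance checks.

```python
# Illustrative sketch only: geometry-based face matching with made-up landmarks.
# Real systems such as CBP's Traveler Verification Service are proprietary and
# typically use learned facial embeddings rather than two hand-picked distances.
from dataclasses import dataclass
from math import dist


@dataclass
class FaceLandmarks:
    """Hypothetical (x, y) coordinates of a few facial landmarks in a photo."""
    left_eye: tuple
    right_eye: tuple
    forehead: tuple
    chin: tuple


def feature_vector(face: FaceLandmarks) -> tuple:
    """Reduce a face to the two measurements mentioned above:
    the distance between the eyes and the distance from forehead to chin."""
    return (
        dist(face.left_eye, face.right_eye),
        dist(face.forehead, face.chin),
    )


def is_match(live: FaceLandmarks, archived: FaceLandmarks, tolerance: float = 5.0) -> bool:
    """Report a match if every measurement agrees within an arbitrary tolerance."""
    return all(
        abs(a - b) < tolerance
        for a, b in zip(feature_vector(live), feature_vector(archived))
    )


if __name__ == "__main__":
    checkpoint_photo = FaceLandmarks((100, 120), (160, 121), (130, 60), (131, 220))
    passport_photo = FaceLandmarks((101, 119), (161, 122), (129, 62), (130, 221))
    # Prints True here because the two sets of landmarks are nearly identical.
    print("Clear to board:", is_match(checkpoint_photo, passport_photo))
```

In a production system, the comparison would run against an entire gallery of archived Department of Homeland Security photos rather than a single passport image, but the pass/fail decision reported at the gate follows the same basic pattern.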

The CBP claims that its facial recognition software has a 97% match rate and saves an average of nine minutes during the overall boarding process compared with the traditional boarding method. Passengers may choose to opt out of facial recognition identification in favor of the traditional boarding procedure. Even with this option, Delta reports that only 2% of its customers choose to opt out of the service. Opponents of facial recognition argue that these low opt-out figures reflect the fact that passengers are not adequately notified of their right to opt out.

Because the facial recognition software relies on geometrically specific facial features, the CBP claims that the process is much more secure than traditional forms of identification. Yet however secure and efficient it may be, facial recognition technology raises concerns over privacy and racial bias.

Opponents argue that the spread of facial recognition technology is dangerous because the market is largely unregulated by the federal government. They fear that certain organizations, namely law enforcement agencies, will access the CBP's vast database of biometric data to surveil and charge alleged criminals. In the absence of federal regulations, some states and municipalities have begun limiting police access to facial recognition data, including data collected through airport security. In San Francisco, for example, officers are prohibited from using facial recognition technology on an arrested individual. Similarly, California lawmakers recently introduced AB 1215–also known as the Body Camera Accountability Act–to stop the state's law enforcement from adding biometric surveillance technology to officer-worn body cameras. Substantially similar regulations have been adopted or proposed in Oakland, CA; Portland, OR; and a few cities in Massachusetts. While cities and states are taking active measures to regulate the growing use of facial recognition technology, opponents hope that the federal government will take similar steps toward strict regulation.

Another concern frequently raised by opponents is that facial recognition technology will exacerbate racial biases. Studies by the ACLU and MIT have found that facial recognition software falsely identifies people of color at disproportionate rates. According to certain human rights groups, facial recognition technologies will force people of color to face unnecessary obstacles when traveling. Finally, there are concerns that nefarious actors will hack biometric databases to collect the personally identifiable information of aircraft passengers.

The CBP believes that potential privacy issues are mitigated by the fact that passengers' photos are deleted after set retention periods. For American citizens, photos are deleted within twenty-four hours of travel. For non-U.S. citizens, photos are deleted within fifteen days of travel, unless the software flags the passenger for additional questioning. The CBP also points to the fact that 185 people who tried to enter the U.S. under false credentials were successfully identified through the facial recognition process.

It is clear that facial recognition in airports carries both benefits and drawbacks. Yes, biometric screening likely does increase security at international points of entry. The statistics also point to the technology's ability to make travel more efficient: no longer will passengers have to dig through their luggage or pockets to locate boarding passes or passports. However, these positive aspects are countered by legitimate privacy concerns. When can agencies access the biometric information collected through airport security for use in a court of law? Some states have spoken on this issue, but the relative silence from the federal government is deafening.

A few recent reports suggest that federal lawmakers are drafting a bipartisan bill to address the issues surrounding facial recognition, but the scope of any eventual regulation remains anyone's guess. Already, large technology corporations, including Amazon and Microsoft, are lobbying for certain standards. Regardless of the regulatory scheme that is eventually proposed, it is clear that the expanding use of facial recognition technology is ripe for federal attention.

Spencer Davies
