In partnership with airlines, U.S. Customs and Border Protection (CBP) has employed facial recognition at major airports to verify travelers’ identities as part of its Biometric Entry-Exit Program. But while the agency purports to have taken steps to incorporate privacy principles, it hasn’t consistently provided information to passengers about how the program works. That’s according to a U.S. Government Accountability Office (GAO) report published this week, which also found that CBP fell short in areas including partner auditing and performance testing.
As early as 2016, CBP began laying the groundwork for the multi-billion-dollar Biometric Entry-Exit Program, partnering with airlines like Delta to establish how face-based passenger screenings might work. CBP has access to passenger manifests, which it uses to build facial recognition databases that also incorporate photos from entry inspections, U.S. visas, and other U.S. Department of Homeland Security corpora. Camera kiosks at airports capture live photos and compare them with photos in the database, algorithmically attempting to identify matches. When there’s no existing photo available for matching, the system compares the live photos to photos from physical forms of identification, like passports and travel documents.
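The two-stage matching flow described above can be sketched in a few lines. This is a hypothetical illustration, not CBP’s actual system: the toy `similarity` function, the embedding vectors, and the 0.9 threshold are all assumptions standing in for a real face-matching pipeline.

```python
import math

def similarity(a, b):
    """Toy cosine similarity between two face-embedding vectors (illustrative only)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_traveler(live_photo, gallery, document_photo=None, threshold=0.9):
    """Compare a live capture against the pre-built gallery (1:N search);
    fall back to the photo on the traveler's physical ID when the gallery
    yields no match above the threshold."""
    best_id, best_score = None, 0.0
    for traveler_id, enrolled_photo in gallery.items():
        score = similarity(live_photo, enrolled_photo)
        if score > best_score:
            best_id, best_score = traveler_id, score
    if best_score >= threshold:
        return best_id                  # matched a gallery photo
    if document_photo is not None and similarity(live_photo, document_photo) >= threshold:
        return "document-verified"      # 1:1 fallback against the physical ID
    return None                         # no match; would trigger manual inspection
```

A usage example under the same assumptions: `match_traveler([0.99, 0.05], {"A": [1.0, 0.0], "B": [0.0, 1.0]})` returns `"A"`, while a live photo matching nothing in the gallery falls through to the document comparison.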
As of March 2020, CBP had deployed facial recognition technology at 27 airports for travelers exiting the U.S. and at 18 airports for travelers entering the U.S. According to the agency, the program has biometrically identified over 23 million travelers on more than 250,000 flights and helped to identify seven impostors.
Eligible foreign nationals and U.S. citizens can opt out of facial recognition if they choose. But in its report, the GAO says the informational resources it reviewed, at ports of entry, online, and through CBP’s call centers, provided limited information and weren’t always complete. CBP’s public websites didn’t accurately reflect where facial recognition technology was being used or tested as of June 2020, even after the GAO raised the issue with leadership in May 2020. And at least one CBP call center operator the GAO reached in November 2019 wasn’t aware of which locations had deployed the technology.
Moreover, the GAO reports that some signs at airport gates where CBP is using facial recognition are outdated, missing, or obscured. At Las Vegas’ McCarran International Airport in September 2019, one sign said photos of U.S. citizens would be held for up to 14 days, while a second sign at a different gate said photos would be held for up to 12 hours, the correct figure. At the same airport, no privacy signs were posted at a gate where facial recognition had been in operation for about two months.
In February, John Wagner, then deputy executive assistant commissioner at CBP, told members of Congress that CBP is working with airlines to print disclaimers on boarding passes and to notify customers at booking time and through mobile alerts and emails. The status of this work is unclear.
CBP requires commercial facial recognition technology partners, contractors, and vendors like NEC to abide by certain data collection and privacy requirements, including restrictions on the use of traveler photos. But the GAO notes that CBP had audited only one of its more than 20 commercial airline partners as of May 2020 and had no plans to ensure all of its partners were audited for compliance. That’s even after a CBP subcontractor breach in June 2019 exposed millions of photos of passengers traveling into and out of the U.S.
CBP’s facial recognition also continues to underperform against baselines, according to the GAO, and it’s unclear to what extent it might exhibit bias against certain demographic groups. In a CBP test conducted from May to June 2019, the agency found that 0.0092% of passengers leaving the U.S. were incorrectly identified, a tiny rate that still amounts to misidentifications on the order of 180 travelers a day at CBP’s volume. (CBP inspects an estimated 2 million-plus international travelers every day.) More damningly, photos of departing passengers were successfully captured only 80% of the time due to camera outages, incorrectly configured systems, and other issues. At one airport, the match failure rate was 25%.
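The scale of those error figures is worth working through. The back-of-the-envelope arithmetic below uses only the numbers above, the 0.0092% misidentification rate and CBP’s stated daily volume of more than 2 million inspected travelers; both are taken as given, and the daily volume is a rough lower bound rather than an exact count.

```python
# Misidentification rate from CBP's May-June 2019 test, as a fraction
error_rate = 0.0092 / 100

# CBP's stated daily inspection volume (a rough lower bound)
travelers_per_day = 2_000_000

# Expected misidentifications per day at that rate and volume
misidentified_per_day = error_rate * travelers_per_day
print(round(misidentified_per_day))  # 184
```

Even a rate that rounds to a hundredth of a percent translates to roughly 184 misidentified travelers every day, and the 20% capture failure rate affects a far larger share of passengers than that.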
The five-person team of CBP officials charged with identifying problems randomly samples only two flights per airport per week, according to the GAO, and the monitoring process doesn’t alert the team when performance falls below minimum thresholds. The implication is that an issue at a particular terminal or airport could continue unabated for days or weeks without CBP’s knowledge.
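The gap the GAO describes, monitoring that never raises an alert when performance drops, could in principle be closed with a simple automated check. The sketch below is hypothetical, not anything CBP runs: the 90% minimum capture rate and the flight records are invented for illustration.

```python
# Hypothetical minimum acceptable photo-capture rate per flight
MIN_CAPTURE_RATE = 0.90

def flag_underperforming(flights):
    """Return the IDs of flights whose capture rate fell below the threshold."""
    return [
        f["flight"]
        for f in flights
        if f["captured"] / f["boarded"] < MIN_CAPTURE_RATE
    ]

# Invented example records: boarded passengers vs. successful photo captures
flights = [
    {"flight": "DL123", "boarded": 200, "captured": 190},  # 95% -- OK
    {"flight": "UA456", "boarded": 180, "captured": 135},  # 75% -- flagged
]
print(flag_underperforming(flights))  # ['UA456']
```

A check like this run against every flight, rather than a random sample of two per airport per week, would surface a failing camera or misconfigured gate within a day instead of weeks.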
The GAO doesn’t rule out bias as one factor contributing to facial recognition errors. While CBP’s own analysis of scanned passengers leaving the U.S. showed a “negligible” effect of demographics on accuracy, the study was limited in scope because CBP doesn’t have access to travelers’ races and ethnicities. CBP had planned to incorporate recommendations from the U.S. National Institute of Standards and Technology by spring 2020, but the pandemic pushed the work back.
While the GAO’s findings aren’t exactly revelatory, they point to an uneven, and problematic, rollout of the Biometric Entry-Exit Program. At minimum, CBP appears to be communicating the program’s policies poorly and failing to audit its partners. At worst, the agency is failing to account for facial recognition systems’ technological shortcomings and proclivity toward bias.
The GAO lays out recommendations it believes might help CBP to address the current challenges, like publishing privacy notices and conducting more regular system performance monitoring. But some challenges — like bias — might be politically, technologically, and logistically insurmountable. And as CBP looks to expand biometric matching beyond airports to additional seaports and land borders, that’s cause for concern.