The deal gives the Border Patrol’s Headquarters Intelligence Division (INTEL) and the National Targeting Center access to Clearview tools. Both units collect and analyze data as part of what CBP says is a coordinated effort to “disrupt, degrade and destroy” people and networks seen as security threats.
The contract states that Clearview provides access to “over 60+ billion publicly available images” and that it will be used for “tactical targeting” and “strategic counter-network analysis”, indicating that the service is intended to be incorporated into analysts’ day-to-day intelligence work rather than being reserved for isolated investigations. CBP says its intelligence units draw from “a variety of sources,” including commercially available tools and publicly available data, to identify people and map their connections to national security and immigration operations.
The agreement requires analysts to handle sensitive personal data, including biometric identifiers such as facial images, and requires non-disclosure agreements for contractors who have access. It does not specify what types of photos agents will upload, whether searches can include US citizens, or how long uploaded images or search results will be retained.
The Clearview contract lands as the Department of Homeland Security faces growing scrutiny over how facial recognition is used in federal enforcement actions across the border, including large-scale crackdowns in US cities that affected US citizens. Civil liberties groups and lawmakers have questioned whether face-searching devices are being deployed as routine intelligence infrastructure rather than as a limited investigative aid, and whether security measures have kept pace with the expansion.
Last week, Senator Ed Markey introduced legislation that would block ICE and CBP from using facial recognition technology altogether, citing concerns that biometric surveillance is being embedded without clear limits, transparency, or public consent.
CBP did not immediately respond to questions about how Clearview will be integrated into its systems, what types of images agents are authorized to upload, and whether searches could include U.S. citizens.
Clearview’s business model is under scrutiny because it relies heavily on scraping photos from public websites. Those images are converted into biometric templates without the knowledge or consent of the people pictured in them.
Clearview also appears on DHS’s recently released artificial intelligence list, which is tied to a CBP pilot launched in October 2025. The inventory entry connects the pilot to CBP’s Traveler Verification System, which compares faces at ports of entry and other border-related screening.
CBP states in its public privacy document that the Traveler Verification System does not use information from “commercial sources or publicly available data.” More likely, at launch, Clearview access will be tied to CBP’s Automated Targeting System, which combines biometric galleries, watch lists, and enforcement records, including files related to recent Immigration and Customs Enforcement operations in areas of the US far from any border.
Clearview AI did not immediately respond to a request for comment.
Recent testing by the National Institute of Standards and Technology, which evaluated Clearview AI among other vendors, found that face-search systems can perform well on “high-quality Visa-like photographs” but falter in less controlled settings. Federal scientists say images captured at border crossings that “were not originally intended for automated facial recognition” produced error rates that were “very high, often exceeding 20 percent, even with more accurate algorithms.”
The test highlights a central limitation of the technology: NIST found that a face-search system cannot be tuned to reduce false matches without also increasing the risk that it fails to recognize the correct person.
As a result, NIST says agencies can operate the software in an “exploratory” setting that returns a ranked list of candidates for human review rather than a single confirmed match. However, when the system is configured to always return candidates, a search for someone who is not in the database will still generate “matches” for review. In those cases, the results will always be 100 percent wrong.