How AI Is Being Used For Border Surveillance
#Security

Startups Reporter
2 min read

U.S. Customs and Border Protection is developing autonomous AI surveillance systems to monitor border crossings, aiming to reduce human oversight, a shift that raises concerns about migrant safety and human rights.

U.S. Customs and Border Protection (CBP) is advancing artificial intelligence systems designed to autonomously monitor border activity, shifting away from human-operated surveillance. The initiative, detailed in recent agency documents, aims to automate the detection of border crossings while reducing personnel requirements, a move experts warn could endanger migrants and worsen existing problems in the immigration system.

The Automation Push

CBP's January 2026 Industry Day briefing revealed plans to address technology gaps through AI integration. Agency documents indicate current surveillance systems require constant human monitoring, leading to missed detections during extended shifts. The proposed solution involves:

  • Creating unified operating systems for land, air, and subterranean monitoring
  • Upgrading mobile surveillance fleets
  • Implementing persistent real-time surveillance in remote areas
  • Expanding autonomy from 1 to 9 of the 12 border monitoring components

According to 2022 planning documents, non-AI solutions would 'increase staff requirements,' while autonomous systems could 'reduce personnel needed for surveillance monitoring.'

Technology and Tradeoffs

The agency's AI inventory includes over 50 tools, ranging from biometric identification and geospatial analysis to asylum fraud detection. One proposed system would link ground sensors to AI-controlled camera towers capable of identifying, classifying, and tracking movement without human intervention.
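
The documents describe the pattern rather than a specific design: a ground-sensor hit cues a camera tower, an AI model classifies whatever tripped it, and the tower begins tracking on its own. The minimal Python sketch below only illustrates that sensor-to-tower handoff; the SensorEvent, classify, and handle_event names are hypothetical placeholders, not CBP's or any vendor's actual interfaces.

```python
# Hypothetical sketch of sensor-cued autonomous detection and tracking.
# These classes and functions are illustrative only; they do not model
# CBP or vendor systems.
from dataclasses import dataclass


@dataclass
class SensorEvent:
    """A ground-sensor activation at a known location."""
    sensor_id: str
    lat: float
    lon: float


@dataclass
class Detection:
    """What the tower's AI model believes triggered the sensor."""
    label: str          # e.g. "person", "vehicle", "animal"
    confidence: float


def classify(event: SensorEvent) -> Detection:
    """Stand-in for a vision model run on imagery from a cued camera tower."""
    # A real system would slew the camera toward the sensor's coordinates
    # and run an object detector; here we return a fixed placeholder result.
    return Detection(label="person", confidence=0.72)


def handle_event(event: SensorEvent, threshold: float = 0.6) -> None:
    """Classify a sensor hit and decide whether to start an autonomous track."""
    detection = classify(event)
    if detection.confidence >= threshold:
        print(f"Sensor {event.sensor_id}: tracking {detection.label} "
              f"near ({event.lat}, {event.lon})")
    else:
        print(f"Sensor {event.sensor_id}: low-confidence hit, no track started")


if __name__ == "__main__":
    handle_event(SensorEvent(sensor_id="GS-17", lat=31.33, lon=-110.94))
```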

Dave Maass of the Electronic Frontier Foundation notes: 'For decades, sensor-triggered surveillance faced technical limitations. Now systems like Anduril's AI towers automate detection and tracking—other vendors are racing to match these capabilities.'

Human Rights Concerns

Advocates highlight significant risks:

  1. Bias amplification: AI systems may produce discriminatory outcomes against vulnerable groups
  2. Dangerous routes: Automated surveillance could push migrants toward more hazardous crossings
  3. Privacy erosion: Mass data collection threatens rights to privacy and non-discrimination

'These technologies lack meaningful consent mechanisms,' explained Mizue Aizeki of the Surveillance Resistance Lab. 'When rights access requires surrendering personal data, and systems operate opaquely, legal challenges become nearly impossible.'

Operational Realities

Despite record border encounters (nearly 250,000 in December 2025), migration experts question AI's effectiveness. Colleen Putzel-Kavanaugh at the Migration Policy Institute observed: 'Automation helps with search-and-rescue operations but doesn't address processing bottlenecks after apprehension. People migrate despite increased surveillance—AI may redirect routes rather than deter movement.'

Samuel Chambers' research indicates expanded surveillance correlates with higher migrant risks: 'Longer crossing times increase dehydration, exhaustion, and fatalities.'

Funding and Future

The Biden administration's 2026 budget includes $101 million for tower maintenance and $6 billion to expand surveillance infrastructure—aiming for 1,000 border towers by 2034. Yet as Maass cautions: 'Industry Day documents repeat the same goals every decade. After 30 years of surveillance failing to solve border challenges, we should question whether AI delivers solutions or just fancier systems.'
