The 2026 Pwn2Own Automotive competition uncovered 76 zero-day vulnerabilities in electric vehicle chargers and infotainment systems, with ethical hackers earning over $1 million in prizes. Meanwhile, a French privacy fine and a new AI security testing framework highlight the evolving landscape of digital rights and security.
The automotive industry faced a stark reminder of its digital vulnerabilities last week as the third annual Pwn2Own Automotive competition in Tokyo exposed 76 unique zero-day vulnerabilities across critical systems. The event, organized by Trend Micro's Zero Day Initiative, resulted in over $1 million in payouts to ethical hackers who successfully exploited targets ranging from Tesla's infotainment system to commercial EV chargers.

Automotive Systems Under Siege
The competition structure pits security researchers against real-world automotive targets in a controlled environment. Participants submit exploit plans and have limited time to demonstrate their attacks. Prizes scale based on the vulnerability's uniqueness, impact, and complexity, creating a market-driven approach to security research.
The most significant single exploit came from the Fuzzware.io team, who claimed the largest payout of $60,000 for discovering an out-of-bounds write vulnerability in the Alpitronic HYC50 EV charger. This single vulnerability allowed them to earn six points toward the competition's scoring system. Over three days, the team successfully demonstrated seven exploits, earning them the "Master of Pwn" title with 28 total points and $215,500 in winnings.
The HYC50 charger proved particularly vulnerable. In addition to Fuzzware's attack, another team exploited a Time-of-Check to Time-of-Use (TOCTOU) vulnerability, using it to install a playable version of Doom on the charger's display—a demonstration that earned them $20,000. A third team also compromised the same charger by exploiting an exposed "dangerous" method in its software.
Tesla's infotainment system wasn't spared either. The Synacktiv team successfully took full control by chaining an information leak vulnerability with an out-of-bounds write, demonstrating how attackers could potentially gain control of critical vehicle functions. Automotive Grade Linux, an open-source platform used by multiple manufacturers, was also compromised through a trio of vulnerabilities.
Legal and Privacy Implications
While automotive security researchers worked to expose vulnerabilities, European regulators imposed a €3.5 million fine on an unnamed company for systematic privacy violations. The French National Commission on Informatics and Liberty found that the company had been sharing customer loyalty data—including email addresses and telephone numbers—with an unnamed social network for targeted advertising since February 2018.
The violation affected over 10.5 million Europeans across 16 countries, representing a massive breach of both the EU General Data Protection Regulation (GDPR) and the French Data Protection Act. The regulator's decision to withhold the company's name reflects a growing tension between public transparency and corporate privacy, even when companies violate user privacy on an industrial scale.
AI Security Enters a New Era
The intersection of artificial intelligence and security took center stage with two significant developments. First, security firm Miggo disclosed a vulnerability in Google's Gemini AI that exposed how AI systems can be manipulated through prompt injection attacks.
The vulnerability exploited Gemini's ability to parse Google Calendar events. When users asked Gemini for their daily schedule, the AI would review their calendar and provide a summary. However, a malicious calendar invitation containing a carefully crafted prompt-injection payload hidden in the event description could cause Gemini to write a summary of private meetings into a newly created calendar event. In many enterprise configurations, this new event would be visible to the attacker without clearly disclosing that Gemini had created it.
Google has since patched the exploit, but Miggo emphasized that this incident highlights a fundamental shift in security thinking. "Effective protection must employ security controls that treat LLMs as full application layers with privileges that must be carefully governed," the company stated. This represents a move away from viewing AI merely as a tool and toward recognizing it as an autonomous agent with access to sensitive data and systems.
Establishing Safe Harbor for AI Research
Recognizing the legal ambiguity surrounding AI security testing, HackerOne published a new "Good Faith AI Research Safe Harbor" document. This framework aims to establish clear rules for ethical hackers testing AI systems, addressing a gap where traditional vulnerability disclosure frameworks don't neatly apply to AI models.
"Organizations want their AI systems tested, but researchers need confidence that doing the right thing won't put them at risk," said Ilona Cohen, HackerOne's chief legal and policy officer. The safe harbor agreement commits organizations to treating good-faith AI research as authorized and refraining from legal action against researchers who follow specific conditions.
These conditions mirror traditional security programs: researchers cannot withhold findings for payment, exfiltrate data, cause unnecessary damage, or reverse-engineer systems to build competing services. The framework provides standardized authorization, removing uncertainty for both organizations and researchers.
Criminals Aren't Immune to Security Failures
In a reminder that security failures affect everyone, cybersecurity researcher Jeremiah Fowler discovered 149 million unique login and password combinations exposed online. The 96 GB dataset contained credentials from social media platforms, dating apps, streaming services, financial institutions, and even government accounts from multiple countries.
The data appeared to have been harvested using infostealer and keylogging malware, then left publicly accessible for nearly a month before Fowler could get the host to secure it. This exposure created a secondary risk: while the data was unsecured, anyone could have accessed it, potentially including other cybercriminals.
Fowler noted that the dataset differed from previous infostealer malware collections he had examined, suggesting evolving tactics among cybercriminals. The incident serves as a stark reminder that even those who specialize in compromising systems often fail at basic security hygiene.
The Road Ahead
These developments collectively signal a maturing digital security landscape. Automotive manufacturers must now consider that their vehicles are effectively computers on wheels, vulnerable to the same types of exploits that plague traditional IT systems. The Pwn2Own Automotive competition provides a controlled environment to identify these vulnerabilities before malicious actors exploit them in the wild.
Meanwhile, the privacy fine and AI security frameworks highlight the growing regulatory and ethical complexity of modern technology. As AI systems become more integrated into daily operations, the need for clear security testing guidelines becomes paramount. The HackerOne safe harbor agreement represents an industry-led effort to establish norms before regulations force compliance.
For consumers, these events underscore the importance of vigilance. The automotive vulnerabilities demonstrate that connected vehicles require the same security awareness as smartphones or computers. The exposed credential database reinforces the need for unique passwords and multi-factor authentication. And the Gemini calendar exploit shows that even sophisticated AI systems have blind spots that can be exploited.
The automotive industry now has 76 new reasons to accelerate security reviews and patch development. The privacy regulator's fine sends a clear message about the consequences of data mishandling. And the AI security frameworks provide a roadmap for responsibly testing increasingly complex systems. In each case, the message is consistent: security cannot be an afterthought in a connected world.

Comments
Please log in or register to join the discussion