Hidden in Plain Sight: How Image Resampling Exposes AI Systems to Stealthy Prompt Injection Attacks
Researchers have uncovered a novel attack vector in which malicious prompts are hidden within seemingly benign images, only to be revealed and executed when AI systems downscale those images for processing. The technique exploits how common resampling algorithms, such as bicubic interpolation, blend neighboring pixels during downscaling: an attacker can craft high-resolution pixel patterns that look innocuous at full size but interpolate into legible instruction text at the model's input resolution. This allows attackers to manipulate platforms like Google Gemini and Vertex AI into performing unauthorized actions, such as exfiltrating sensitive data. The discovery underscores a critical and evolving threat to the security of multimodal AI systems increasingly integrated into enterprise workflows.
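Because the payload only becomes legible after resampling, the most direct audit is to reproduce the downscaling step and inspect the result. The sketch below is a minimal illustration assuming the Pillow library; the 512x512 target size, function name, and filenames are hypothetical, and a real check would need to match the target pipeline's actual resampling filter and dimensions:

```python
from PIL import Image

def preview_model_input(path: str, target: tuple[int, int] = (512, 512)) -> Image.Image:
    """Render the downscaled view a vision model would actually receive."""
    img = Image.open(path).convert("RGB")
    # Bicubic resampling computes each output pixel as a weighted blend of a
    # 4x4 neighborhood of input pixels, so carefully placed high-resolution
    # patterns can merge into legible text at the lower resolution.
    return img.resize(target, resample=Image.Resampling.BICUBIC)

if __name__ == "__main__":
    # Hypothetical filenames for illustration.
    preview = preview_model_input("suspect_upload.png")
    preview.save("what_the_model_sees.png")
```

If instruction-like text appears in the preview but is absent at full resolution, the upload is suspect. Matching the production pipeline's exact algorithm matters: a pattern tuned for bicubic downscaling may vanish entirely under a nearest-neighbor or bilinear filter.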