Search Articles

Search Results: PromptInjection

CometJacking Attack Exposes Critical Flaw in AI Browser Security, Stealing Emails Via Crafted URLs

CometJacking Attack Exposes Critical Flaw in AI Browser Security, Stealing Emails Via Crafted URLs

Security researchers reveal 'CometJacking'—a novel attack exploiting Perplexity's AI-powered Comet browser to steal sensitive user data like emails and calendar entries through malicious URL parameters. Despite proof-of-concept validation showing encoded data exfiltration bypassing safeguards, Perplexity dismissed the vulnerability as 'not applicable,' raising concerns about autonomous agent security.
LegalPwn: How Buried Legalese Becomes an LLM Jailbreaking Tool

LegalPwn: How Buried Legalese Becomes an LLM Jailbreaking Tool

Security researchers at Pangea have uncovered 'LegalPwn,' a novel attack exploiting AI models' deference to legal language. By embedding malicious instructions within verbose legal disclaimers, attackers can bypass guardrails in popular LLMs like GPT-4o and Gemini, tricking them into approving harmful code execution. This vulnerability highlights critical risks as AI integrates deeper into security-sensitive systems.
Hidden in Plain Sight: How Image Resampling Exposes AI Systems to Stealthy Prompt Injection Attacks

Hidden in Plain Sight: How Image Resampling Exposes AI Systems to Stealthy Prompt Injection Attacks

Researchers have uncovered a novel attack vector where malicious prompts are hidden within seemingly benign images, only to be revealed and executed when AI systems downscale the images for processing. This technique exploits fundamental image resampling algorithms, allowing attackers to manipulate platforms like Google Gemini and Vertex AI into performing unauthorized actions, such as exfiltrating sensitive data. The discovery underscores a critical and evolving threat to the security of multimodal AI systems increasingly integrated into enterprise workflows.
Beyond Vibe Coding: Design Patterns to Fortify AI Agents Against Prompt Injection

Beyond Vibe Coding: Design Patterns to Fortify AI Agents Against Prompt Injection

As AI agents gain tool access and permissions, prompt injection attacks threaten critical systems—with even Microsoft and Atlassian falling victim. This analysis explores six architectural patterns and security best practices to defend against these exploits while balancing utility, featuring insights from real-world vulnerabilities and mitigation strategies.
Gemini Hijacked: How Poisoned Calendar Invites Turned Google's AI Into a Smart Home Saboteur

Gemini Hijacked: How Poisoned Calendar Invites Turned Google's AI Into a Smart Home Saboteur

Security researchers have demonstrated the first physical-world attack executed via generative AI, hijacking Google's Gemini to control smart home devices through poisoned calendar invites. The exploit reveals critical vulnerabilities in AI agents as they gain real-world control, forcing Google to accelerate new defenses against 'indirect prompt injection' attacks.