Search Articles

Search Results: AIsecurity

Cryptography Reveals Fundamental Flaws in AI Safety Filters

Cryptography Reveals Fundamental Flaws in AI Safety Filters

Cryptographers have demonstrated that external protections for AI models like ChatGPT are inherently vulnerable to bypass attacks. Using cryptographic tools such as time-lock puzzles and substitution ciphers, researchers prove that any safety filter operating with fewer computational resources than the core model can be exploited, exposing an unavoidable gap in AI security.

The AI Confidentiality Crisis: When Client Data Leaks Through Automation

As enterprises increasingly deploy AI for document generation and administrative tasks, security teams face an alarming dilemma: commercially sensitive client information embedded in codebases and platforms becomes nearly impossible to redact before processing. This exposes critical vulnerabilities in workflows leveraging tools like Atlassian MCP, forcing a reckoning with AI's hidden data governance risks.
Task Injection: The Emerging Threat Targeting Autonomous AI Agents

Task Injection: The Emerging Threat Targeting Autonomous AI Agents

Google researchers reveal a new vulnerability class called 'Task Injection' that compromises autonomous AI agents by manipulating their natural language instructions. Attackers can hijack agent workflows through poisoned inputs like calendar events or emails, forcing unintended actions. This represents a fundamental security challenge as agentic AI systems become increasingly integrated into business operations.
CometJacking Attack Exposes Critical Flaw in AI Browser Security, Stealing Emails Via Crafted URLs

CometJacking Attack Exposes Critical Flaw in AI Browser Security, Stealing Emails Via Crafted URLs

Security researchers reveal 'CometJacking'—a novel attack exploiting Perplexity's AI-powered Comet browser to steal sensitive user data like emails and calendar entries through malicious URL parameters. Despite proof-of-concept validation showing encoded data exfiltration bypassing safeguards, Perplexity dismissed the vulnerability as 'not applicable,' raising concerns about autonomous agent security.