Search Articles

Search Results: LLMsecurity

Scammers Hijack AI Search Results with Phone Number Poisoning

Scammers Hijack AI Search Results with Phone Number Poisoning

Cybercriminals are exploiting AI search tools by poisoning public web content with fraudulent customer support numbers, creating a new security risk as AI assistants increasingly become the primary source of information for users.

The AI Arms Race: Why Open Source Models Are Changing Everything

As proprietary AI giants dominate headlines, a quiet revolution is brewing in open-source language models. This shift promises democratization of AI but introduces critical security and governance challenges that demand immediate attention from developers and enterprises.
The Transcript Trap: How ‘Helpful’ LLMs Keep Falling for Prompt Injection

The Transcript Trap: How ‘Helpful’ LLMs Keep Falling for Prompt Injection

A deceptively simple ‘transcript hack’ reveals why modern language models remain fundamentally vulnerable to prompt injection—even when wrapped in structured protocols and safety layers. Underneath the tooling, next-token prediction plus smart generalization is still in charge, and that’s precisely the problem.
AI Chatbots Amplify Sanctioned Russian Propaganda in War Disinformation Campaigns

AI Chatbots Amplify Sanctioned Russian Propaganda in War Disinformation Campaigns

New research reveals that leading AI chatbots, including OpenAI's ChatGPT and Google's Gemini, are citing sanctioned Russian propaganda sources in responses about the Ukraine war. This exploitation of data voids highlights critical vulnerabilities in large language models and raises urgent questions about AI's role in spreading disinformation amid EU regulatory scrutiny.
Over 1,100 Exposed Ollama Servers Found: A Critical AI Security Wake-Up Call

Over 1,100 Exposed Ollama Servers Found: A Critical AI Security Wake-Up Call

Cisco researchers uncovered widespread security lapses in large language model deployments, identifying over 1,100 publicly accessible Ollama servers vulnerable to unauthorized access and prompt injection. The study leverages Shodan scanning to reveal how default configurations enable risks like model theft and resource hijacking, demanding urgent industry-wide security reforms.
Hidden in Plain Sight: How Image Resampling Exposes AI Systems to Stealthy Prompt Injection Attacks

Hidden in Plain Sight: How Image Resampling Exposes AI Systems to Stealthy Prompt Injection Attacks

Researchers have uncovered a novel attack vector where malicious prompts are hidden within seemingly benign images, only to be revealed and executed when AI systems downscale the images for processing. This technique exploits fundamental image resampling algorithms, allowing attackers to manipulate platforms like Google Gemini and Vertex AI into performing unauthorized actions, such as exfiltrating sensitive data. The discovery underscores a critical and evolving threat to the security of multimodal AI systems increasingly integrated into enterprise workflows.