Hidden Prompts Expose AI-Generated Peer Reviews: Turning Vulnerabilities into Verification Tools
New research reveals how Large Language Models can be manipulated to bias scientific peer reviews through hidden prompts embedded in PDFs. In a countermove, editors can use similar techniques to detect AI-generated reviews, transforming a security flaw into a safeguard for academic integrity.