Cyberslop and the AI Fear Machine: How MIT and Safe Security Became a Cautionary Tale
When MIT puts its name on a cybersecurity study, CISOs pay attention. When that study claims "80% of ransomware attacks use Generative AI," it doesn't just spark attention—it drives budget decisions, board briefings, and product roadmaps.
There’s only one problem: in this case, it wasn’t true.
What unfolded around the MIT–Safe Security working paper is more than an academic misstep. It’s a warning shot for a security industry being poisoned by what security researcher Kevin Beaumont has aptly branded "cyberslop": the profitable blending of authority, AI hype, and baseless claims into something that looks like research, reads like research, but behaves like marketing.
This isn’t a gossip item. It’s an integrity failure with direct consequences for how defenders prioritize risk.
The Claim That Broke the Sniff Test
Earlier in 2025, MIT’s CAMS (Cybersecurity at MIT Sloan) published a working paper, co-branded with Safe Security, asserting that roughly 80% of ransomware attacks involved Generative AI. It lived on an MIT domain, was credited to MIT and Safe Security staff, and circulated widely across:
- MIT-hosted pages and PDFs
- Conference presentations
- Cybersecurity press coverage
- Mainstream outlets like the Financial Times, which later linked to a quietly edited page
For many non-frontline stakeholders, that was enough. It had the right logos, the right acronyms, the right anxiety. CISOs started forwarding it, vendors started referencing it, and the "AI is now driving almost all ransomware" narrative gained oxygen.
Then practitioners actually read it.
The paper:
- Attributed AI usage to a broad swath of ransomware groups without evidence.
- Treated historic malware (e.g., Emotet) as if it were AI-powered or AI-adjacent.
- Muddled categories, calling or implying that trojans are ransomware groups and conflating tooling with operators.
- Cited sources (including CISA) that did not substantiate the core claims.
The central premise—that analysis of 2,800 incidents showed 80% were GenAI-related—had no defensible technical grounding.
This wasn’t just an overreach. For people who actually work ransomware cases, it was unrecognizable.
Silent Edits, Missing Disclosures, and Broken Trust
Once Beaumont and others publicly dissected the paper, things moved fast—but not in the direction of transparency.
Key issues:
Undisclosed conflicts of interest:
- Michael Siegel (MIT) is on Safe Security’s technical advisory board.
- Other MIT-affiliated individuals are similarly entangled.
- These relationships were reportedly not disclosed in the PDF that carried MIT’s institutional authority.
Scrubbing instead of correcting:
- The original PDF disappeared from the live site after public criticism.
- Pages were rewritten to soften or entirely remove the GenAI-focused claims.
- The material was reframed as part of an "Early Research Papers" section—after the fact—without a clear, front-and-center correction.
- External coverage, including the FT, pointed to now-sanitized URLs, leaving a mismatch between what was cited and what exists.
No visible acknowledgement:
- No prominent erratum.
- No clear explanation of what was wrong in the methodology or claims.
- No public ownership of how this slipped through, despite the scale of its reach.
For a university that trains future leaders in technical disciplines, this is not a formatting issue. It’s an institutional credibility issue.
If an MIT-branded report can morph this easily from "AI drives most ransomware" to "never mind" without explicit accountability, defenders are justified in asking: what else are we not supposed to notice?
Cyberslop: When AI Hype Becomes a Business Model
Beaumont’s term "cyberslop" is precise: the industrial production of low-integrity cyber narratives, dressed up with:
- Reputable brands (MIT logos, labs, centers)
- Fear-forward language around AI, "agentic" systems, and emergent threats
- Vendor-aligned calls to action that steer readers to specific products or platforms
Safe Security openly markets "agentic AI" security solutions. The contested MIT-linked report:
- Elevated GenAI as a newly dominant, urgent ransomware driver.
- Declared the need for new approaches aligned with GenAI-era risk.
- Conveniently reinforced Safe Security’s positioning as a solution designed "with MIT."
The alignment of:
- Sensational threat framing,
- Vendor financial interest, and
- Lack of conflict disclosure
is not subtle. It’s exactly the kind of pattern we expect defenders to detect in adversary tradecraft; we should be at least as vigilant when it’s our own industry.
And this isn’t limited to one vendor or one lab. Many security companies now deploy the same playbook:
- Assert that "threat actors are adopting AI at scale" (often without concrete incident-response data).
- Use that assertion to frame their AI-powered platform as a mandatory control.
- Omit the fact that leading IR firms and annual threat reports still show the same primary drivers: credential theft, unpatched edge devices, phishing, poor segmentation, flat networks, weak recovery.
AI can and will be abused by attackers—but turning "may" into "80% of ransomware" without evidence is not foresight. It’s fabrication.
What the Real Data Actually Says
Talk to teams who live inside breaches—incident responders, DFIR specialists, MDR analysts—and a very different picture emerges:
- Initial access is still dominated by:
  - Stolen credentials (often from infostealers)
  - Exposed or unpatched VPNs, firewalls, and web apps
  - Basic phishing, often with mediocre lures
  - RDP abuse and lateral movement via weak internal hygiene
- Ransomware affiliates optimize what works: tried-and-true playbooks, not bleeding-edge AI research.
- Most observed GenAI usage in the wild today is incremental:
  - Better-crafted phishing templates
  - Faster content generation for scams
  - Occasional script/loader assistance by low-skill actors
Important nuance for technical readers:
- None of this justifies "80% of ransomware is GenAI-driven."
- None of this reverses the hierarchy of controls: identity, patching, segmentation, backups, logging, EDR/XDR, hardening, and practiced IR remain overwhelmingly more impactful than any AI-specific countermeasure.
Overstating AI-driven ransomware doesn’t merely distort a statistic; it distorts priorities. It nudges organizations to:
- Over-index on buying "AI security" products.
- Under-invest in foundational resilience that would limit both traditional and AI-enhanced attacks.
That distortion is the real damage.
Why This Matters for Engineers and Security Leaders
If you write code, run infrastructure, or own security strategy, the MIT–Safe Security episode is not background noise. It directly affects how your environment gets defended—and how your time and budget get justified.
Key implications:
Erosion of trust in technical authority
- When respected institutions amplify vendor-aligned hype without rigor, defenders must second-guess even high-prestige sources.
- This increases cognitive load for security teams already drowning in reports, advisories, and pitches.
Misallocation of scarce resources
- Boards and executives see headlines, not DFIR case notes.
- A dramatic but flawed number from an MIT-branded paper can reroute millions from:
  - Basic IAM improvements
  - Legacy system isolation
  - Patch and config automation
  - Backup and recovery modernization
  toward speculative "AI risk" tooling that doesn’t address their actual incidents.
Weaponized uncertainty
- Ambiguous claims like "threat actors are adopting AI to scale attacks" (without concrete TTPs, IOCs, or case studies) keep buyers nervous but uninformed.
- Nerves sell software. They do not secure systems.
For a technical community that prides itself on measurement, reproducibility, and verifiable evidence, accepting cyberslop is professional negligence.
How to Smell Cyberslop Before It Pollutes Your Roadmap
You don’t need a threat intel team dedicated to "AI narratives" to defend against this. You need a discipline of verification—applied as ruthlessly to vendor and academic claims as to attacker infrastructure.
A practical checklist for security leaders, architects, and engineers:
Ask for the underlying data
- If a paper or vendor claims "X% of attacks now use AI," demand:
  - Sample sizes
  - Data collection methods
  - How "AI use" was defined and validated
- If they can’t answer clearly, treat it as marketing, not telemetry.
Check alignment with independent incident-response reporting
- Compare claims against:
  - DFIR firms’ yearly reports
  - Your own SOC / IR data
  - Public advisories from entities like CISA where methodology is transparent
- Look for convergence. Lone, spectacular numbers should be suspect.
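To make that comparison concrete, here is a minimal sketch that puts a headline figure next to your own closed-incident data. The CSV path and column names (`closed_incidents_2025.csv`, `category`, `genai_confirmed`) are hypothetical stand-ins for whatever your ticketing or DFIR case system actually exports.

```python
# Minimal sketch: sanity-check a headline claim ("80% of ransomware uses GenAI")
# against your own closed-incident data. The CSV path and column names are
# hypothetical placeholders for your own case-management export.
import csv
import math

CLAIMED_RATE = 0.80  # the figure the report asks you to believe

def observed_genai_rate(path: str) -> tuple[float, float, int]:
    """Return (observed rate, 95% margin of error, sample size) for ransomware cases."""
    total = genai = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["category"].strip().lower() != "ransomware":
                continue
            total += 1
            if row["genai_confirmed"].strip().lower() in {"yes", "true", "1"}:
                genai += 1
    if total == 0:
        return 0.0, 0.0, 0
    p = genai / total
    margin = 1.96 * math.sqrt(p * (1 - p) / total)  # normal-approximation 95% CI
    return p, margin, total

if __name__ == "__main__":
    rate, margin, n = observed_genai_rate("closed_incidents_2025.csv")
    print(f"Observed GenAI involvement in ransomware cases: {rate:.0%} ± {margin:.0%} (n={n})")
    print(f"Claimed rate: {CLAIMED_RATE:.0%}")
    if n and CLAIMED_RATE > rate + margin:
        print("The claim sits well outside your own data -- ask the authors for their methodology.")
```

Even a rough check like this produces a sample size and an error bar to set against a single, context-free percentage.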
Inspect conflicts of interest
- Are named researchers sitting on the board or advisory panel of the sponsoring vendor?
- Is that disclosed prominently in the paper itself, not buried in a bio?
- Are recommendations suspiciously tailored to one product category or provider?
Look for precise TTPs, not abstract anxieties
- Credible AI-threat research:
  - Names specific malware families, infra, campaigns, or techniques.
  - Gives examples: prompts, models, logs, binaries.
- Cyberslop handwaves:
  - "Attackers are increasingly leveraging AI" with no investigative artifacts.
Watch for silent edits
- If a high-profile report is altered without changelogs or errata:
  - Consider that a signal of weak governance.
  - Archive early copies for your own internal reference.
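To make the archiving habit concrete, here is a minimal sketch that keeps timestamped, hashed copies of any report you cite internally, so a silent edit shows up as a hash change on the next fetch. The archive directory and the example URL are placeholders, not references to any specific paper.

```python
# Minimal sketch: keep a local, timestamped archive of high-profile reports so
# silent edits are detectable later. Paths and the example URL are placeholders.
import hashlib
import json
import pathlib
import urllib.request
from datetime import datetime, timezone

ARCHIVE_DIR = pathlib.Path("report_archive")
MANIFEST = ARCHIVE_DIR / "manifest.json"

def snapshot(url: str) -> None:
    """Fetch url, store a timestamped copy, and flag content changes via SHA-256."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    data = urllib.request.urlopen(url, timeout=30).read()
    digest = hashlib.sha256(data).hexdigest()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    manifest = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    previous = manifest.get(url, {}).get("sha256")
    if digest == previous:
        print(f"Unchanged: {url}")
        return
    copy_path = ARCHIVE_DIR / f"{stamp}_{digest[:12]}"
    copy_path.write_bytes(data)
    manifest[url] = {"sha256": digest, "fetched": stamp, "path": str(copy_path)}
    MANIFEST.write_text(json.dumps(manifest, indent=2))
    print(f"{'CHANGED since last snapshot' if previous else 'Archived first copy'}: {url}")

if __name__ == "__main__":
    snapshot("https://example.edu/working-paper.pdf")  # placeholder URL
```

Run it on a schedule and you get a lightweight changelog for documents whose publishers decline to keep one.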
Institutional prestige doesn’t get a free pass. In 2025, "we saw it on a .edu or in a glossy PDF" is not a control.
A Better Standard for AI-Driven Threat Intel
The AI-in-security conversation desperately needs rigor—but that doesn’t mean downplaying risk. It means characterizing it correctly.
What credible AI threat research should look like:
- Clear taxonomy (a minimal tagging sketch follows this list):
  - Differentiate between:
    - AI-assisted social engineering
    - AI-assisted vulnerability discovery
    - AI-assisted malware development
    - AI-as-infrastructure abuse (e.g., abusing LLM APIs)
- Observable behaviors:
  - Show logs, samples, screenshots, or IR narratives that demonstrate AI involvement.
- Bounded claims:
  - Use language like "observed in X% of our dataset" with context, not sweeping industry-wide declarations without basis.
- Separation of powers:
  - If a commercial entity funds the work, treat it as sponsored research with full disclosure.
  - Include external reviewers who actually respond to incidents.
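To tie the taxonomy and bounded-claims points together, here is a minimal sketch that tags incidents against those four categories and reports per-category rates with the sample size attached to every figure. The incident records and field names are hypothetical, standing in for your own case data.

```python
# Minimal sketch: encode the taxonomy above as explicit categories and report
# bounded, per-category rates instead of one sweeping industry-wide number.
from collections import Counter
from dataclasses import dataclass, field
from enum import Enum, auto

class AIInvolvement(Enum):
    SOCIAL_ENGINEERING = auto()  # AI-assisted social engineering
    VULN_DISCOVERY = auto()      # AI-assisted vulnerability discovery
    MALWARE_DEV = auto()         # AI-assisted malware development
    INFRA_ABUSE = auto()         # AI-as-infrastructure abuse (e.g., LLM API abuse)

@dataclass
class Incident:
    case_id: str
    ai_involvement: set[AIInvolvement] = field(default_factory=set)
    evidence: list[str] = field(default_factory=list)  # prompts, logs, samples

def bounded_summary(incidents: list[Incident]) -> None:
    """Report per-category rates with the sample size attached to every claim."""
    n = len(incidents)
    if n == 0:
        print("No incidents in dataset -- nothing to claim.")
        return
    counts = Counter(cat for inc in incidents for cat in inc.ai_involvement)
    for cat in AIInvolvement:
        print(f"{cat.name}: observed in {counts[cat] / n:.0%} of our dataset (n={n})")

if __name__ == "__main__":
    # Hypothetical cases: only tag a category when there is evidence to show for it.
    cases = [
        Incident("IR-2025-001", {AIInvolvement.SOCIAL_ENGINEERING}, ["phishing kit output"]),
        Incident("IR-2025-002"),
        Incident("IR-2025-003"),
    ]
    bounded_summary(cases)
```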
There is real work to be done:
- Building detection for AI-shaped phishing and fraud campaigns.
- Modeling how LLMs might compress attacker learning curves.
- Hardening CI/CD and MLOps pipelines against data and model poisoning.
- Understanding where AI gives defenders more leverage than attackers.
But every time we inflate fiction into "80% of ransomware," we make it harder for real research—and real defenders—to be heard.
Reclaiming the Narrative from the Slop
The MIT–Safe Security controversy is not just about one flawed paper quietly retired. It’s a mirror.
It reflects a security ecosystem where:
- Vendor marketing can masquerade as neutral science.
- Institutions can lend their brand without enforcing their standards.
- Sensational AI narratives eclipse the boring, solvable problems that actually burn companies down.
Practitioners do not have to accept this.
You can:
- Publicly question numbers that don’t align with real incident patterns.
- Insist your organization’s strategy is grounded in observed TTPs, not vendor storyboards.
- Treat "cyberslop" as a diagnostic label whenever an AI-threat narrative arrives without the data and transparency your craft requires.
If there’s a constructive outcome to this episode, it won’t be another outraged thread or a quietly replaced PDF. It will be a cultural reset where technical communities demand that anyone claiming authority on AI and security—be they a startup or MIT—meets the same evidentiary bar we already expect from ourselves when we ship code, run infra, or publish an incident report.
That’s not just good journalism or good science. It’s the baseline for defending a world already noisy enough without the slop.
Source: Original reporting and analysis based on Kevin Beaumont’s "CyberSlop — meet the new threat actor, MIT and Safe Security" (DoublePulsar, Nov 3, 2025), related public archives, and community responses.