In the vast, ever-expanding ocean of online content, a new tide is rising, and it's not made of water. It's a slurry of machine-generated text, images, and ideas, colloquially known as 'AI slop.' For developers, engineers, and tech leaders who rely on quality information to stay ahead, this deluge presents a serious challenge. The signal-to-noise ratio is plummeting, making it harder than ever to find genuinely insightful articles amidst a deafening roar of algorithmically regurgitated content.

As Robin Moffatt, a Developer Advocate at Confluent, recently detailed in a post on his blog, the problem has reached critical mass. While low-quality content has always existed, the barrier to entry for producing it has collapsed. Previously, even the most shameless plagiarist had to manually find, copy, and paste content. Now, with a Medium account and access to a Large Language Model (LLM), anyone can pump out dozens of articles in a single day. This automated content generation is not just an annoyance; it's actively polluting the information ecosystem that professionals depend on.

Moffatt, who curates a monthly list of interesting links, has developed a keen sense for the 'smells' of AI-generated content. By analyzing patterns in titles, preview images, and article bodies, he has created a heuristic for weeding out the slop before wasting precious time on it. His analysis offers a valuable guide for any tech professional looking to navigate the current content landscape.

### Step 1: The Title – The First Whiff of Trouble

The title is the first handshake between an article and its potential reader. For Moffatt, it's also the first checkpoint in his content triage process. He uses an RSS reader to scan headlines, and certain patterns are immediate red flags.
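To make the triage concrete, here is a minimal Python sketch of the headline-scanning step using the third-party `feedparser` library; the feed URL is a placeholder, and the pattern checks sketched in the sections below can be bolted onto this loop to score each title before it earns a click.

```python
import feedparser  # third-party: pip install feedparser

# Placeholder URL; substitute the feeds you actually follow.
FEED_URL = "https://example.com/blog/feed.xml"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Each headline gets eyeballed (or scored) before the article earns a click.
    print(entry.get("title", ""))
```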

**The Emoji Overload:**

> ✨⚡🤔 Emojis❗ 💡💪
>
> Humans can use them too, but LLMs love them. Add +2 to the smell-o-meter.

While humans use emojis sparingly for emphasis, LLMs have been trained on vast datasets of social media and marketing copy where they are ubiquitous. An article title saturated with sparkles, lightning bolts, and thinking-face emojis is often a strong indicator of machine generation.
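As a rough illustration of how this check could be automated, the sketch below counts characters in the common emoji blocks and applies Moffatt's +2 weighting; the function name is invented for illustration, and the Unicode ranges are indicative rather than exhaustive.

```python
import re

# Miscellaneous Symbols, Dingbats, and the supplementary emoji planes;
# a rough net, not a complete enumeration of every emoji code point.
EMOJI_RE = re.compile(r"[\u2600-\u27BF\U0001F300-\U0001FAFF]")

def emoji_smell(title: str) -> int:
    """Add +2 to the smell-o-meter when a title leans on emojis."""
    return 2 if len(EMOJI_RE.findall(title)) >= 2 else 0

assert emoji_smell("Kafka consumer rebalancing explained") == 0
assert emoji_smell("✨ 10 Kafka Secrets That Will SHOCK You ⚡") == 2
```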

**The 'HoT TakE' in Unicode:**

> 𝓤𝓷𝓲𝓬𝓸𝓭𝓮 𝒇𝒐𝒓𝒎𝒂𝒕𝒕𝒊𝒏𝒈 𝐭𝐞𝐱𝐭 𝓮𝒇𝒇𝒆𝒄𝓽𝓼

This stylistic choice, often used to create a sense of urgency or controversy, is another common trope. The effect is frequently more comical than compelling, resembling a 'hot take' that is 'about as hot as cold cat sick.'
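Because these styled letters live in a single known Unicode block (Mathematical Alphanumeric Symbols, U+1D400 through U+1D7FF), detecting them mechanically is straightforward. A sketch, with an invented function name and weighting:

```python
def unicode_styling_smell(title: str) -> int:
    """Flag 'fancy' lettering from the Mathematical Alphanumeric Symbols
    block (U+1D400-U+1D7FF), which renders as bold, script, and so on."""
    styled = sum(1 for ch in title if 0x1D400 <= ord(ch) <= 0x1D7FF)
    return 2 if styled else 0

assert unicode_styling_smell("A plain ASCII title") == 0
assert unicode_styling_smell("𝓤𝓷𝓲𝓬𝓸𝓭𝓮 𝐡𝐨𝐭 𝐭𝐚𝐤𝐞") == 2
```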

**The Regurgitated 'How-To':**

> 'How to use $OLD_TECHNOLOGY'

Titles following this formula are less a sign of AI and more a symptom of content farms. They suggest the article is a generic, rehashed tutorial offering little new value to an experienced audience.

**The Clickbait Hyperbole:**

> 'We replaced Kafka with COBOL and shocked everyone'
>
> 'I replaced Kafka with happy puppies and halved our cloud bills'

This is perhaps the most pungent smell of all. LLMs excel at generating sensational, implausible headlines designed purely for clicks. Moffatt notes that articles with these titles are '100% made up,' promising revolutionary results that defy logic and experience.
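Headlines of this shape are formulaic enough that even a crude pattern list catches many of them. The patterns below are purely illustrative and would need constant gardening; real slop mutates faster than any regex list.

```python
import re

# Illustrative patterns only; extend and prune as the slop evolves.
HYPE_PATTERNS = [
    re.compile(r"\bwe replaced \w+ with\b", re.IGNORECASE),
    re.compile(r"\bshocked everyone\b", re.IGNORECASE),
    re.compile(r"\b(halved|doubled) (our|your)\b", re.IGNORECASE),
]

def hyperbole_smell(title: str) -> int:
    """The most pungent smell earns the biggest score."""
    return 3 if any(p.search(title) for p in HYPE_PATTERNS) else 0

assert hyperbole_smell("We replaced Kafka with COBOL and shocked everyone") == 3
assert hyperbole_smell("Tuning Kafka consumer rebalancing") == 0
```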


alt="Article illustration 5"
loading="lazy">

### Step 2: The Preview Image – A Picture of a Thousand Words

After a title piques interest, the preview image is the next data point. As RSS feeds often only provide a snippet, the image can make or break the decision to click.

**The 'Boomer Art' Header:**

> The first huge rotten stinky smell is the AI-generated header image.

What started as a novel or witty use of AI for imagery has become a tired cliché. The generic, glossy, abstract 3D render with floating geometric shapes and a gradient background (dubbed 'boomer art') has lost all meaning. Like MS WordArt in the 2000s, these images have become a visual shorthand for low-effort content.

alt="Article illustration 4"
loading="lazy">

**The Spelling Error:**

> If the image also has spelling errors, then do not pass go, do not collect 200 page views, go straight to jail.

A spelling error in an AI-generated image is a dead giveaway. It signifies that the author used a text-to-image model and couldn't be bothered to perform a basic quality check. If the quality bar for the header image is this low, what does it imply for the rigor of the article itself?

**The Nonsensical Word Salad:**

> Second to spelling errors are nonsensical word-salad text diagrams. Also a red flag.

Images filled with buzzwords like 'synergy,' 'disrupt,' and 'leverage' arranged in meaningless diagrams are another hallmark of AI slop. They signal a lack of genuine understanding or effort.

### Step 3: The Article – The Deep Dive (or Lack Thereof)

Even after passing the title and image filters, the article body itself can reek of AI generation. Moffatt admits to being 'shallow and picky,' but for good reason. These final smells often confirm the initial suspicion.

**The Oddly-Specific yet Unspecific Opening:**

> Our event-streaming cluster was sputtering during partition reshuffles. Every time a subscriber crashed or another replica spun up, the whole consumer cohort stalled for roughly ten to twenty seconds. Tasks stacked, retries swamped the failure queue, and the duty engineer was alerted several times weekly. We replaced the broker with a wire-compatible alternative, kept the identical protocol and client SDKs, and saw p95 latency slide from 360ms to 180ms while retry volume fell to none.

This opening is technically detailed but lacks context. Who is 'we'? Is this a real company's case study or an anonymous fabrication? The absence of any author or company affiliation is a major red flag. Another common AI opening is the '$thing had been happening for months. We kept throwing money at it. Then this one weird thing happened that changed everything' trope, which feels formulaic and unoriginal.

**The ASCII Art Diagram:**

> Next up is a real stinker that has so far given me 100% detection rate: ASCII art diagrams.

While nostalgic for those who came of age with BBS systems, ASCII art diagrams in modern technical articles are a strong indicator of AI authorship. It's easier for an LLM to generate a block of text that *looks* like a diagram than it is for a human to create a clear visual in a tool like Excalidraw.

```
        [ microservice-a ]
                |
                v
           ( Kafka )
          /    |    \
         v     v     v
[ microservice-b ][ microservice-c ][ microservice-d ]
         |               |                 |
         v               v                 v
     ( Kafka ) ------ ( Kafka ) ------ ( Kafka )
         ^               ^                 ^
         |               |                 |
     [ microservice-e ][ microservice-f ][ microservice-g ]
```
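This tell is mechanical enough to check for: a block where several lines consist mostly of pipes, arrows, and brackets is rarely prose. A rough heuristic follows; the 0.4 ratio and three-line threshold are guesses, not calibrated values, and the function name is invented.

```python
# Characters that rarely dominate prose but make up ASCII 'diagrams'.
DIAGRAM_CHARS = set("|^v<>/\\-+[]()")

def looks_like_ascii_diagram(block: str, min_lines: int = 3) -> bool:
    """True if several lines are made mostly of box-and-arrow glyphs."""
    suspect = 0
    for line in block.splitlines():
        chars = line.replace(" ", "")
        if not chars:
            continue
        glyph_ratio = sum(1 for ch in chars if ch in DIAGRAM_CHARS) / len(chars)
        if glyph_ratio >= 0.4:  # guessed threshold; letters like 'v' add noise
            suspect += 1
    return suspect >= min_lines
```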

**The Shallow Deep Dive:**

> Deep-dive content that’s only a few paragraphs long.

An article promising a deep technical analysis of a complex system like Kafka, yet which concludes in just four or five paragraphs, is almost certainly AI-generated. A genuine deep dive requires explaining the system, the problem, attempted solutions, the fix, and the results, a process that takes time and space. The AI version feels like 'eating white bread; your mouth knows it’s consumed several slices, but your brain is confused because your stomach is still telling it that it’s empty.'
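One could even score this mechanically by comparing a title's promise of depth against the body's length. A sketch, where the 800-word floor is an arbitrary assumption rather than anything from Moffatt's post:

```python
def shallow_deep_dive_smell(title: str, body: str) -> int:
    """A 'deep dive' far shorter than its promise is white bread:
    several slices consumed, stomach still empty. The 800-word floor
    is an arbitrary guess, not a calibrated value."""
    promises_depth = "deep dive" in title.lower()
    return 3 if promises_depth and len(body.split()) < 800 else 0
```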

**The Unrealistic $NEW_TECH Hype:**

> 'We rewrote Kafka in Go/Rust/etc in 20 lines'; the occasional one is true, most are BS.

Articles claiming to have replaced a mature, complex technology with a new one in a weekend, or with a handful of code, are often fantasy. LLMs, trained on platforms like Hacker News, know this is a popular narrative and will happily generate it, complete with exaggerated claims of cost savings and performance gains.

**The Usual AI Tells:**

> - Bullet point paragraphs
> - Oh my sweet, much-maligned—and unfairly so—em-dashes. I write with them for real, unfortunately so do the AI slop machines 😢
> - Emojis
> - Short section headings

These are the smaller, more subtle cues. The overuse of em-dashes, the insertion of emojis mid-paragraph, and the proliferation of short, choppy headings are stylistic quirks that LLMs often mimic.
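None of these is damning on its own (plenty of humans write with em-dashes, Moffatt included), which is why they work better tallied together. A sketch of such a tally over a markdown body; every threshold here is a guess, not a rule:

```python
import re

EMOJI_RE = re.compile(r"[\u2600-\u27BF\U0001F300-\U0001FAFF]")

def stylistic_tells(markdown_body: str) -> int:
    """Tally the subtler tells; each threshold is a guessed heuristic."""
    score = 0
    if markdown_body.count("\u2014") > 5:  # em-dash (U+2014) overuse
        score += 1
    if EMOJI_RE.search(markdown_body):  # emojis dropped mid-paragraph
        score += 1
    headings = re.findall(r"^#{1,4} +(.+)$", markdown_body, re.MULTILINE)
    if headings and sum(len(h.split()) for h in headings) / len(headings) < 4:
        score += 1  # short, choppy section headings
    return score
```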

### The Author Profile – The Final Verdict

All these smells might be circumstantial, but the author profile can often provide the final, damning evidence. Good technical content takes time to write and requires deep expertise. Yet some AI-farmed Medium profiles defy this reality. Consider an author who publishes the following in a single week:

- Java 21 Made My Old Microservice Faster Than Our New Go Service
- Bun Just Killed Node.js For New Projects — And npm Did Not See It Coming
- Tokio Made My Rust Service 10x Faster — Then It Made My Life 10x Harder
- The 10x Engineer Is Real. I’ve Worked With Three
- Redis Is Dead: How We Replaced It With 200 Lines of Go
- Why Senior Engineers Can’t Pass FizzBuzz (And Why That’s Fine)

The breadth and volume of this output are impossible for a single human expert. A quick check of their LinkedIn profile might reveal a junior engineer with six months of experience, making their claims of re-architecting production systems overnight highly suspect.
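Publishing velocity is also checkable. Medium exposes a per-author RSS feed, so counting posts per week takes only a few lines; the feed URL in the comment is an example, and what counts as a 'suspicious' cadence remains a judgment call.

```python
from collections import Counter

import feedparser  # third-party: pip install feedparser

def posts_per_week(author_feed_url: str) -> Counter:
    """Bucket an author's posts by approximate week; a sustained cadence
    of five-plus 'war story' posts a week outpaces any single human expert."""
    feed = feedparser.parse(author_feed_url)
    weeks: Counter = Counter()
    for entry in feed.entries:
        published = entry.get("published_parsed")  # a time.struct_time
        if published:
            weeks[(published.tm_year, published.tm_yday // 7)] += 1
    return weeks

# e.g. posts_per_week("https://medium.com/feed/@some-author")
```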

### The Enshittification is Here, and AI is Making It Worse

The term 'enshittification,' coined by Cory Doctorow, describes the process by which a platform, once useful, is systematically degraded to benefit its owners at the expense of its users. For the open internet, the rise of AI-generated content is a primary driver of this phenomenon.

> The Enshittification is here and AI is making it much, much, worse.


Crap content has always existed, but there was a cost to producing it. Now, that cost is zero. A 'muppet with a Medium account and an LLM' can flood the zone with low-quality, often factually incorrect, articles. This automated process threatens to drown out genuine voices and make the discovery of high-quality information a herculean task. The beauty of an open internet where anyone can publish is being overshadowed by the sheer volume of noise, making curation and critical thinking more essential than ever.