Stop Sloppypasta: Why Pasting Raw AI Output Is Digital Rudeness

Startups Reporter
5 min read

Sharing unedited AI-generated text without context or verification creates effort asymmetry and erodes trust - here's why it's problematic and how to use AI responsibly.

The Problem with Sharing Raw AI Output

You've seen it happen: a notification pops up, you open a message, and find yourself staring at several paragraphs of AI-generated text - complete with bullet points, numbered lists, and that distinctive "it's not X, it's Y" phrasing that screams "written by a chatbot." The sender probably spent about ten seconds on it, asking a chatbot a question and forwarding the response verbatim without reading it themselves.

This phenomenon - let's call it "sloppypasta" - represents a fundamental breakdown in digital communication etiquette. When someone forwards text they haven't personally considered, they're asking you to do work they chose not to do. The asymmetric effort makes it rude.

Why It's a Problem

The Effort Imbalance

Before large language models, writing required genuine effort. Authors spent time considering and selecting their words with intention. This effort was balanced by the time readers spent consuming the content. That balance is now broken - the effort to produce text is effectively free, but the effort required to read hasn't changed.

Worse, LLMs are incentivized to be verbose. Providers of API-priced models have a per-token revenue incentive to train chatty models that use many tokens, and research shows that longer, heavily formatted posts are often preferred as more engaging. This further widens the effort asymmetry.
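To make the per-token incentive concrete, here is a rough back-of-the-envelope sketch in Python. The price and response lengths are illustrative assumptions only, not any provider's actual rates.

    # Illustrative only: hypothetical per-token price and response lengths,
    # not any specific provider's actual billing.
    PRICE_PER_MILLION_OUTPUT_TOKENS = 10.00  # assumed USD rate for the example

    def revenue(output_tokens: int) -> float:
        # Revenue billed for a single response of the given length.
        return output_tokens / 1_000_000 * PRICE_PER_MILLION_OUTPUT_TOKENS

    terse = revenue(150)    # a direct, to-the-point answer
    chatty = revenue(600)   # the same answer padded with headers and bullet points

    print(f"terse:  ${terse:.4f}")   # $0.0015
    print(f"chatty: ${chatty:.4f}")  # $0.0060 - four times the revenue for one question

Per response the difference is negligible, but it scales with every request served.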

Trust and Credibility Issues

Modern LLMs are trained to "be helpful," which in practice means producing an answer even when they don't have a reliable one - hence their propensity for hallucination (confabulation) and the widespread feeling that LLMs are bullshit generators. Even when provided with tools to look up grounding information, they can still produce outdated facts, wrong figures, and plausible nonsense.

When you share raw AI output, you're lending it your own credibility, and if the content turns out to be wrong, that credibility is what gets spent. The AI's authoritative tone also removes the signal recipients previously used to distinguish genuine expertise from plausible-sounding slop.

Cognitive Debt

Writing is thinking. The writing process forces the author to work through their thoughts, building comprehension and retention. Multiple studies have found that delegating tasks to LLMs creates cognitive debt: shortcutting thinking with an LLM ultimately reduces both comprehension and recall of the delegated subject.

Common Examples of Sloppypasta

The Eager Beaver

A conversation participant wants to contribute to the topic at hand, so they ask a chatbot and share whatever comes back. The intention is good - they genuinely want to help - but the wall of generic AI text they paste in buries the discussion already underway.

The OrAIcle

Someone asks a specific question. Another person puts it into a chatbot and pastes the response as the answer. "ChatGPT says" is the enshittified LLM-era equivalent of LMGTFY (Let Me Google That For You). Recipients are left to figure out whether it's AI-generated, whether it's correct, and which part actually answers the question.

The Ghostwriter

The sender shares AI output as their own work, with no indication a chatbot wrote it. Recipients have no reason to question it and may act on information that is out of date, incomplete, or simply wrong.

The Feedback Loop

Sloppypasta creates a compounding negative feedback loop: the sender forfeits learning and credibility while the recipient burns effort and loses trust. Receiving raw AI output feels bad because it violates a long-standing assumption about written text - that a human put real thought into the words before sending them.

As one observer noted: "For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity."

Simple Guidelines for Responsible AI Use

Read First

Read the output before you share it. If you haven't read it, you don't know whether it's correct, relevant, or current. Delegating work to AI creates cognitive debt; actively working through the results is how you limit the damage to your own understanding.

Verify

Check the facts before you forward them. Anything you forward carries your implicit endorsement - your reputation depends on the quality of what you share. LLMs are trained to "be helpful" and will serve up outdated facts, wrong figures, and plausible nonsense rather than leave a request unanswered.

Distill

Cut the response down to what matters. Distilling the generated response to the useful essence is your job. LLMs are incentivized to use many words when few would do.

Disclose

Share how AI helped. If you've read, verified, and edited it, send it as yours - preferably with a note that you worked with AI assistance. If you're sharing raw output, say so explicitly. Disclosure restores the trust signals that sloppypasta destroys and tells the recipient what you checked and what they may be on the hook for.

Share Only When Requested

Never drop unsolicited AI output into a conversation. AI-generated text creates effort asymmetry; be respectful of the people you're asking to read it.

Share AI output as a link or attached document rather than dropping the full text inline. In messaging environments, a large paste takes over the viewport and crowds out the existing conversation. A link lets the recipient choose when - and whether - to engage.

The Bottom Line

AI capabilities keep improving, and using AI to draft, brainstorm, or accelerate your work will only become more valuable. However, using AI should not make your productivity someone else's burden. New tools require new manners.

Use AI to accelerate your work or improve what you send. Don't use it to replace thinking about what you're sending. Your credibility, your relationships, and your own understanding all depend on it.
