Beyond “Bad Code”: How High-Performing Teams Turn Vague Complaints into Measurable Quality

Ask developers what they mean by "bad code," and the answers converge on a handful of recurring failure modes. Bad code is typically:
- Hard to read and reason about
- Insecure or easily exploitable
- Bug-prone and brittle
- Inefficient to the point of impacting UX or cost
- Out of sync with agreed team conventions
Each of these is contextual. A thrown-together script for a one-off migration is held to a different standard than the payments service or an AI inference pipeline touching production data. But across contexts, the same red flags recur.
1. When the Code Lies to Its Readers
Unreadable code is the root of many downstream failures. Misleading names, 500-line god functions named doStuff, inconsistent indentation, and tangled control flow all weaponize cognitive load against your team.
For a modern engineering org, this isn’t aesthetic nitpicking. Readability is an availability and latency concern:
- Slow reviews → slower delivery.
- Misunderstood behavior → production regressions.
- Onboarding drag → higher cost per feature.
Linters and IDE hints help, but they’re only effective when tied to a shared, enforced definition of how your code should look and behave.
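To make this concrete, here is a minimal Python sketch (the names and domain are invented for illustration): the same logic twice, first hiding its intent behind a grab-bag name and single-letter variables, then named for what it actually does.

```python
# Hypothetical example: identical behavior, very different cognitive load.

def do_stuff(d):
    # Misleading: the name says nothing, the variables hide intent.
    r = []
    for k, v in d.items():
        if v > 100:
            r.append(k)
    return r

def customers_over_credit_limit(balances_by_customer, credit_limit=100):
    # Same behavior, but the name and parameters document the intent.
    return [
        customer
        for customer, balance in balances_by_customer.items()
        if balance > credit_limit
    ]

balances = {"acme": 250, "globex": 40}
assert do_stuff(balances) == customers_over_credit_limit(balances) == ["acme"]
```

A reviewer can approve the second version without reverse-engineering it, which is exactly the latency win the bullets above describe.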
2. Maintainability: The Hidden Interest Rate
Bad code and high technical debt are not synonymous, but they compound each other.
Tightly coupled logic—web handling, DB access, business rules, formatting—folded into a single function increases the blast radius of every change. A 20-minute tweak becomes a multi-day bug hunt.
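A small sketch of that coupling (the pricing rule and names are hypothetical): one function folding together parsing, a business rule, and presentation, versus the same logic split into focused units that can change and be tested independently.

```python
# Hypothetical example of coupled vs. decoupled logic.

def handle_discount_coupled(raw_amount: str) -> str:
    amount = float(raw_amount)   # input parsing
    if amount > 100:             # business rule
        amount *= 0.9
    return f"${amount:.2f}"      # presentation

# Decoupled: each concern changes (and breaks) in isolation.
def parse_amount(raw: str) -> float:
    return float(raw)

def apply_discount(amount: float, threshold: float = 100, rate: float = 0.9) -> float:
    return amount * rate if amount > threshold else amount

def format_usd(amount: float) -> str:
    return f"${amount:.2f}"

assert handle_discount_coupled("200") == format_usd(apply_discount(parse_amount("200")))
```

In the coupled version, a change to the discount rule risks the formatting, and vice versa; in the decoupled version the blast radius of each change is one small function.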
Studies routinely show teams spending a double-digit percentage of their time on avoidable debt; Qodana’s article cites a 2022 study estimating up to 25% of development time. For leaders, that’s not just an engineering concern; it’s a P&L problem.
The real signal: if your team hesitates to touch parts of the codebase, you’re not dealing with isolated “bad files.” You’re carrying structural risk.
3. Scalability: When “Works on My Machine” Is Systemic Negligence
Code that passes functional tests can still be "bad" if it silently encodes scaling failure:
- N+1 queries lurking in loops.
- Blocking calls on hot paths.
- Linear-time work repeated per request with no caching.
These don’t show up as bugs in staging; they appear as outages on launch day. As traffic scales, "bad" performance patterns become existential.
High-performing teams treat scalability as a dimension of code quality, not an afterthought reserved for SREs.
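The N+1 pattern above can be sketched with sqlite3 (the schema and data are illustrative): one query for the parent rows, then one query per row, versus a single JOIN.

```python
# Illustrative N+1 query pattern using the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, user_id INTEGER);
    INSERT INTO orders VALUES (1, 10), (2, 11), (3, 10);
    CREATE TABLE users (id INTEGER, name TEXT);
    INSERT INTO users VALUES (10, 'ada'), (11, 'grace');
""")

# N+1: one query for the orders, then one query PER order.
# Fine at 3 rows; a launch-day outage at 3 million.
orders = conn.execute("SELECT id, user_id FROM orders").fetchall()
names_n_plus_1 = [
    conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()[0]
    for _, user_id in orders
]

# Batched: a single JOIN replaces N lookups.
names_joined = [
    row[0]
    for row in conn.execute(
        "SELECT u.name FROM orders o JOIN users u ON u.id = o.user_id ORDER BY o.id"
    )
]

assert names_n_plus_1 == names_joined == ["ada", "grace", "ada"]
```

Both versions pass the same functional tests, which is precisely why this class of "bad" only surfaces under load.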
4. Security: In the Age of AI, Sloppy Means Dangerous
Qodana’s emphasis here is blunt and accurate: unsafe code is bad code.
An unsanitized SQL string concatenation might feel like a junior mistake, but under AI-driven development and “shadow AI” coding patterns, these mistakes are easier than ever to introduce at scale. With LLMs generating code and developers pasting snippets at speed, organizations that lack guardrails are effectively crowdsourcing vulnerabilities.
Security must be:
- Embedded in code review.
- Enforced through static analysis and policies.
- Defined as a non-negotiable dimension of what "good" looks like.
Otherwise, "works" in QA becomes "leaks" in production.
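The string-concatenation mistake mentioned above, and its fix, in a self-contained sqlite3 sketch (table and payload are illustrative):

```python
# String concatenation vs. a parameterized query, using stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # classic injection payload

# BAD: user input is spliced into the SQL text itself.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# GOOD: placeholders keep data out of the query structure.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

assert len(unsafe) == 2   # the injection matched every row
assert safe == []         # the parameterized query matched nothing
```

Static analysis rules that flag string-built SQL catch exactly this pattern, whether a human or an LLM wrote it.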
5. Fragility and Bug Magnetism
Lev Liadov of the Qodana frontend team captures a feeling every engineer knows:
“Bad code is code that forces you into an infinite cycle of fixing one part and breaking another.”
That cycle usually correlates with:
- Lack of tests—especially around critical paths.
- Hidden coupling—changes in one module unexpectedly impact others.
- Ad hoc error handling that masks real failures.
Once a file or service gains that reputation, every sprint plan that depends on it becomes fiction.
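The third bullet, masking failures through ad hoc error handling, can be sketched like this (the config-loading scenario is hypothetical):

```python
# A catch-all handler that hides WHY something failed, versus narrow
# handling that only absorbs the one error it expects.
import json
import logging

def load_config_masked(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        # Swallows FileNotFoundError, JSONDecodeError, permission errors --
        # the caller gets {} and never learns which failure occurred.
        return {}

def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        logging.warning("config %s missing, using defaults", path)
        return {}
    # Malformed JSON or permission problems still raise loudly.
```

The masked version "works" until the day a corrupted config silently becomes an empty one in production, which is how a service earns the reputation L43 describes.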
6. Style Drift and Process Friction
Non-standard formatting and workflows are deceptively expensive. When code ignores agreed conventions, it forces:
- Reviewers to context-switch and normalize.
- Tools to misbehave (diff noise, merge pain).
- Teams to re-litigate decisions that should be settled.
"It runs" is not the bar. If it erodes shared norms, it’s bad for the system—even if the function is technically correct.
Why Defining “Bad Code” Is Now a Strategic Decision
What Qodana’s piece gets right—and where many teams stumble—is the insistence that "bad" and "good" must be defined locally, explicitly, and operationally.
A payment processor’s "bad code" profile is different from a research prototype’s. A regulated healthcare platform’s risk tolerance is not a game studio’s. But inside each environment, the rules must be clear.
Codifying what your org considers unacceptable:
- Sets expectations for new hires and vendors.
- Aligns architecture, security, and product on trade-offs.
- Turns subjective complaints into actionable, measurable gaps.
Without that clarity, your reviewers are arguing taste, not risk.
From Vibes to Signals: Practical Guardrails Against Bad Code
The Qodana article surfaces familiar best practices. What matters is how they interlock into a system.
Here’s how high-maturity teams operationalize them.
1. Start With Intent, Not Syntax
Good code is a correct answer to a fully understood problem.
- Require short written problem summaries for non-trivial work.
- Make inputs, outputs, and constraints explicit.
This shifts the conversation from "clever solution" to "fit-for-purpose solution," which is where quality starts.
2. Prefer Simple Over Clever (Especially Under Pressure)
If a developer cannot explain a function in under a minute, it’s a design smell.
- Ban heroically clever, undocumented hacks on critical paths.
- Optimize for clarity first; micro-optimize hotspots when you have profiling data.
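As a small illustration of that trade-off: a well-known bit-trick next to the version anyone can explain in under a minute. The trick is legitimate on a profiled hotspot, but only with the clarifying comment attached.

```python
# Clever vs. clear: both decide whether n is a power of two.

def is_power_of_two_clever(n: int) -> bool:
    # Bit-trick: a power of two has exactly one set bit, so n & (n - 1)
    # clears it to zero. Opaque without this comment; guard n <= 0 explicitly.
    return n > 0 and (n & (n - 1)) == 0

def is_power_of_two_clear(n: int) -> bool:
    # Explainable in under a minute: halve while even, check we reach 1.
    if n <= 0:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1

assert all(
    is_power_of_two_clever(n) == is_power_of_two_clear(n) for n in range(-4, 1025)
)
```

Default to the clear version; earn the clever one with profiling data and a comment.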
3. Enforce DRY and Small, Focused Units
The rule of three is still underrated:
- See the same pattern 3 times? Extract it.
- Keep functions cohesive and short enough to test in isolation.
This is not purism; it’s incident containment.
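A minimal sketch of the rule of three in action (the order domain and field names are invented): the same validation appeared three times, so it became one small, testable unit.

```python
# Hypothetical example: extract a pattern after its third appearance.

def require_positive(value: float, field: str) -> float:
    # Extracted after the third copy-paste; now tested (and fixed) in ONE place.
    if value <= 0:
        raise ValueError(f"{field} must be positive, got {value}")
    return value

def create_order(quantity: float, unit_price: float, weight_kg: float) -> dict:
    return {
        "quantity": require_positive(quantity, "quantity"),
        "unit_price": require_positive(unit_price, "unit_price"),
        "weight_kg": require_positive(weight_kg, "weight_kg"),
    }

order = create_order(2, 9.99, 1.5)
assert order["unit_price"] == 9.99
```

When the validation rule changes, the fix lands once instead of three times, which is the incident-containment point above.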
4. Names as a First-Class Design Tool
Developers read names far more than they read implementations.
- Treat naming as part of design review.
- For critical domains (billing, auth, AI safety), define a glossary and enforce it in code.
5. Testing as a License to Refactor
Without tests, refactoring is vandalism with good intentions.
- Maintain a safety net of behavioral tests on critical paths.
- Add targeted unit tests where logic is dense or risky.
This converts "we’re scared to touch it" zones into evolvable systems.
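A minimal sketch of that safety net, assuming a dense bit of legacy pricing logic (the function and its discount rule are invented): characterization tests pin the observed behavior on both sides of a boundary before anyone refactors it.

```python
# Behavioral ("characterization") tests: lock in current behavior first,
# refactor second. Written pytest-style as plain assert functions.

def legacy_price(quantity: int, unit_price: float) -> float:
    # Dense legacy logic we want to refactor safely.
    total = quantity * unit_price
    if quantity >= 10:
        total *= 0.95
    return round(total, 2)

def test_no_discount_below_ten():
    assert legacy_price(9, 10.0) == 90.0

def test_discount_at_ten():
    assert legacy_price(10, 10.0) == 95.0
```

With these in place, restructuring `legacy_price` is an ordinary change instead of an act of faith.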
6. Static Analysis + Peer Review: The Two-Layer Defense

Tools like Qodana—and its ecosystem peers—are most powerful when used as policy engines, not optional suggestions.
- Run static analysis in CI for every merge request.
- Block on high-severity issues: security flaws, obvious bugs, dangerous patterns.
- Let humans focus reviews on architecture, readability, and trade-offs, not trailing whitespace.
Automated checks turn "bad code" from an opinion into a failing condition you can see on a dashboard.
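One way to wire that up, sketched under assumptions: many static analyzers, Qodana included, can emit SARIF, a standard JSON interchange format that nests findings under `runs[].results[]` with a severity `level`. A small gate script can then turn "high-severity issue" into a failing exit code; the file name `qodana.sarif.json` is an assumption about your pipeline's output location.

```python
# Hedged sketch of a CI quality gate over a SARIF report.
import json

def count_errors(sarif_path: str) -> int:
    # Count results at SARIF's highest standard severity level, "error".
    with open(sarif_path) as f:
        report = json.load(f)
    return sum(
        1
        for run in report.get("runs", [])
        for result in run.get("results", [])
        if result.get("level") == "error"
    )
```

Run right after analysis in CI, a nonzero count maps to a nonzero exit code that blocks the merge, which is the "failing condition on a dashboard" in practice.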
7. Style Guides, Governance, and Traceability
Style guides are only as real as their enforcement.
- Adopt a formatter (Prettier, Black, ktlint, etc.) and make it non-negotiable.
- Use Qodana or similar tools to enforce language- and repo-wide conventions.
- Document architectural decisions (ADR-style) so future engineers understand why things look the way they do.
Grey areas breed bad code; governance reduces the grey.
8. Security as a Built-In Constraint
Make it culturally explicit: insecure code is rejected code.
- Validate and sanitize all inputs; encode outputs.
- Manage secrets via vaults or environment-based secret tooling.
- Use static analysis tuned for security rules relevant to your stack and industry.
If your AI workflows generate code, run those outputs through the same gates—or stricter ones.
9. Incremental Refactoring Instead of Fantasy Rewrites
The Qodana guidance wisely pushes incrementalism over big-bang rewrites.
- Budget continuous refactoring into regular sprints.
- Prioritize debt where risk, frequency, and impact intersect: slow queries, brittle services, security hot spots.
Rewrites feel cathartic; disciplined, observable cleanup actually ships.
When Teams Stop Saying “This Is Bad” and Start Saying “This Violates X”
The most important shift Qodana advocates—and one mature teams embrace—is linguistic: move from subjective labels to explicit contracts.
Instead of:
- "This is bad code."
Say:
- "This endpoint couples three responsibilities; it violates our SRP guideline and is untestable."
- "This loop issues a query per item; it breaks our performance budget and will fall over at projected traffic."
- "This input path is unsanitized; it violates our security policy and is a potential injection vector."
Once "bad code" is defined by reference to agreed rules, quality stops being an argument and becomes an engineering process.
And this is where platforms like Qodana matter beyond vendor marketing. By encoding conventions, security rules, and architectural constraints into automated checks, they help teams:
- Catch issues early—before they metastasize into production failures.
- Align multiple squads and languages on one quality model.
- Give leads and execs visibility into risk, debt, and trends.
In an era of AI-generated code, rapidly shifting stacks, and intense compliance pressure, relying on vibes and hero reviewers is no longer defensible.
Defining what “bad” means for your organization—and wiring that definition into your tooling—is how modern teams protect their roadmap, their talent, and their users.
Source: Adapted and expanded from JetBrains Qodana Blog, “What Developers Really Mean by ‘Bad Code’” (Nov 2025). Original: https://blog.jetbrains.com/qodana/2025/11/what-is-bad-code/