## A New Pattern for an AI-Native Software Stack

Large language models have made it trivial to conjure thousands of lines of code on demand. They have not made it trivial to **trust** that code. The modern service stack is riddled with what MIT CSAIL’s Daniel Jackson calls *feature fragmentation*: a single feature’s logic—sharing, liking, authentication, billing—smeared across microservices, handlers, middleware, and databases.

This scattered intent is hard for humans to trace and even harder for AI systems to synthesize safely. When your "share" button is implemented in four services and three pipelines, there is no single place where "what sharing means" actually lives.

In work presented at the SPLASH conference in Singapore, Jackson and PhD student Eagon Meng propose a structural answer: design software in terms of **concepts** and **synchronizations**. It’s not just another modularity slogan; it’s an attempt to make software *legible by construction*—to humans and to LLMs. Their core claim is deceptively simple: *what you see in the architecture should be what the system does*.

---

## Concepts: Where Behavior Finally Lives in One Place

In this model, a **concept** is a first-class unit of functionality that aligns with how humans talk about systems: *SharePost*, *Like*, *FollowUser*, *Comment*, *Inventory*, *Order*, *Payment*, and so on. Each concept encapsulates:

- Its state
- Its operations (actions it can perform or expose)
- Its invariants and semantics

Critically, a concept is meant to be **coherent and localized**. If you want to know how sharing works, you read the *Share* concept. Not three services, two controllers, one ORM, and a saga.

Where traditional microservices gave us deployment boundaries, they rarely solved semantic drift: the same feature slices through multiple services. Concepts are an attempt to restore semantic locality without throwing away distributed architectures.
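To make the idea concrete, here is a minimal sketch of what a concept might look like: a single unit owning its state, actions, and invariant. The class name, fields, and methods are illustrative assumptions, not code from the paper.

```typescript
// Hypothetical sketch of a "Like" concept: all of its state,
// actions, and invariants live in this one unit.
type UserId = string;
type PostId = string;

class LikeConcept {
  // State: which users have liked which posts.
  private likes = new Map<PostId, Set<UserId>>();

  // Action: record a like. Using a Set makes the concept's invariant
  // structural: at most one like per user per post, so the action is idempotent.
  like(user: UserId, post: PostId): void {
    if (!this.likes.has(post)) this.likes.set(post, new Set());
    this.likes.get(post)!.add(user);
  }

  // Action: withdraw a like.
  unlike(user: UserId, post: PostId): void {
    this.likes.get(post)?.delete(user);
  }

  // Query: expose a count without leaking internal state.
  count(post: PostId): number {
    return this.likes.get(post)?.size ?? 0;
  }
}
```

The point of the sketch is locality: reading this one unit tells you everything "liking" means, with no other service or handler owning part of its semantics.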
---

## Synchronizations: Making the Glue Explicit

If concepts are the clean building blocks, **synchronizations** are the contracts that describe how they interact. Meng and Jackson introduce a small domain-specific language (DSL) to specify synchronizations declaratively.

Instead of encoding integration logic as ad hoc imperative glue—callbacks, listeners, chained APIs—the synchronization DSL states explicit rules, such as:

> When action A in concept X occurs under condition P, invoke action B in concept Y and keep state S1 and S2 consistent.

At a high level, synchronizations:

- Declare how concepts compose
- Define propagation of events and state
- Capture cross-cutting concerns (e.g., error handling, response shaping, persistence policies) in one place

Because these rules are **explicit, structured, and small in vocabulary**, they’re well-suited for static analysis and, crucially, for **LLM generation and verification**. Instead of asking an LLM to infer hidden wiring patterns buried in code, we ask it to read (or emit) a synchronization spec that’s designed to be unambiguous.

---

## Why This Matters in the Age of Code-Generating LLMs

The timing of this proposal is not incidental. LLMs are powerful at token-level synthesis but weak at reliably reconstructing **global intent** from tangled, non-local architectures. The result is familiar to anyone experimenting with AI-generated patches in large systems: plausible, occasionally correct, and dangerously unaware of side effects.

Concepts and synchronizations directly attack the three pain points that cripple AI-assisted engineering:

1. **Hidden semantics**
   Today, the meaning of "share" or "refund" is diffused. In the proposed model, semantic centers are explicit: concepts define meaning; synchronizations define interactions.
2. **Uncheckable integration logic**
   Traditional integration code is imperative and scattered, which makes both formal verification and automated reasoning brittle. A constrained DSL for synchronizations is fertile ground for model checking, consistency checks, and automated test generation.
3. **Specification vacuum**
   LLMs need structure to align code with intent. As Thomas Ball notes, this work gives us a modular, machine-readable way of saying what we want. You can imagine a pipeline where product requirements map to concepts, and synchronizations become the primary artifact AI tools operate on.

In effect, the framework reframes AI coding from "generate a bunch of opaque code" to "generate and validate explicit contracts between well-defined concepts."

---

## A Case Study: From Fragmented Features to Legible Behavior

To test the pattern, the MIT team modeled core social features—liking, commenting, sharing—each as standalone concepts, then used synchronizations to orchestrate behavior across them.

Without the pattern:

- Each feature’s logic sprawled across multiple services
- Common concerns (error handling, response formatting, persistence) were duplicated and inconsistent
- Understanding a single feature required spelunking through the whole stack

With concepts and synchronizations:

- Each feature was centralized in a concept: you could open one unit and understand its full behavior contract
- Synchronizations defined when actions in one feature triggered behavior in another
- Cross-cutting concerns were factored into shared synchronizations instead of leaking into every codepath

This is more than tidy design. It creates a **visually and logically inspectable map** of how everything fits together—exactly the substrate architectural tools, verifiers, and LLMs need.

---

## Distributed Systems, Human Semantics, and Weaker Consistency

The pattern scales beyond toy web apps.
Because synchronizations are explicitly about coordination, they become natural tools for:

- Coordinating replicas in distributed systems
- Managing interactions across shared databases
- Expressing consistency levels directly in the architecture

Notably, the framework allows for *weakened synchronization semantics*—for instance, eventual consistency expressed declaratively—without losing legibility. Instead of burying "eventual" behavior in retry loops and message queues, the architecture can say: these concepts are loosely synchronized; these others are strict. That’s catnip for both verification tools and SREs.

Kevin Sullivan’s reading of the work is blunt: most of our software is built on abstractions convenient to machines, not to humans, and that misalignment has real-world consequences. Concepts invert that: they start from human-understandable units of purpose, then tie them down with formal, analyzable structure.
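To illustrate the rule shape described above ("when action A in concept X occurs under condition P, invoke action B in concept Y") together with strict versus weakened synchronization, here is a toy sketch. The record shape, the `"strict"`/`"eventual"` modes, and the concept names are all assumptions for illustration; they are not the paper's actual DSL.

```typescript
// Hypothetical sketch of declarative synchronizations between concepts.
type Action = { concept: string; name: string; payload: unknown };

interface Synchronization {
  when: { concept: string; action: string }; // trigger: action A in concept X
  where?: (payload: any) => boolean;         // condition P
  then: { concept: string; action: string }; // invoke action B in concept Y
  mode: "strict" | "eventual";               // consistency, stated in the architecture
}

const syncs: Synchronization[] = [
  // When a post is shared publicly, notify the author, synchronously.
  {
    when: { concept: "Share", action: "share" },
    where: (p) => p.visibility === "public",
    then: { concept: "Notification", action: "notify" },
    mode: "strict",
  },
  // When a post is liked, update trending scores, eventually.
  {
    when: { concept: "Like", action: "like" },
    then: { concept: "Trending", action: "bump" },
    mode: "eventual",
  },
];

// Toy engine: strict rules run inline; eventual rules are queued
// for a background worker instead of being buried in retry loops.
const queue: Action[] = [];
function emit(action: Action, dispatch: (a: Action) => void): void {
  for (const s of syncs) {
    if (s.when.concept !== action.concept || s.when.action !== action.name) continue;
    if (s.where && !s.where(action.payload)) continue;
    const next = { concept: s.then.concept, name: s.then.action, payload: action.payload };
    if (s.mode === "strict") dispatch(next);
    else queue.push(next);
  }
}
```

Because the rules are data rather than scattered callbacks, a tool (or an LLM) can enumerate them, check them for conflicts, and see at a glance which pairs of concepts are strictly versus loosely coupled.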
<img src="https://news.lavx.hu/api/uploads/mits-concepts-and-synchronizations-could-rewrite-how-we-architect-ai-generated-software_20251113_081720_image.jpg" 
     alt="Article illustration 2" 
     loading="lazy">

---

## Toward Concept Catalogs and AI-Native Architectures

The most intriguing implication is cultural.

Jackson imagines "concept catalogs": shared, vetted libraries of domain concepts—ShoppingCart, AccessControl, AuditTrail, Notification, RateLimit—with clearly defined semantics. System design would look less like wiring bespoke glue and more like:

1. Select well-understood concepts from a catalog
2. Write synchronizations that describe how they interact for your product
3. Let tools (including LLMs) generate the underlying implementation and checks
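The three steps above can be pictured as a small sketch. Everything here is assumed for illustration: the catalog contents, the concept names, and the string-based sync spec are hypothetical, and step 3 is represented only by a spec-validation check standing in for the tooling the article envisions.

```typescript
// Hypothetical sketch of catalog-driven design. All names are illustrative.
const catalog = new Set(["ShoppingCart", "Payment", "Notification", "AuditTrail"]);

// Step 1: select well-understood concepts from the catalog.
const chosen = ["ShoppingCart", "Payment", "Notification"];

// Step 2: write synchronizations describing how they interact for this product.
const syncSpec = [
  { when: "ShoppingCart.checkout", then: "Payment.charge", mode: "strict" },
  { when: "Payment.succeeded", then: "Notification.receipt", mode: "eventual" },
];

// Step 3 would hand `chosen` and `syncSpec` to tooling (or an LLM) to generate
// the implementation and checks; here we only validate that every rule
// references a vetted catalog concept.
function validateSpec(): boolean {
  return (
    chosen.every((c) => catalog.has(c)) &&
    syncSpec.every((s) =>
      [s.when, s.then].every((ref) => catalog.has(ref.split(".")[0]))
    )
  );
}
```

Even this toy version shows the shift in where complexity lives: the product-specific part is a small, checkable spec, not bespoke glue code.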

For industry, that suggests a few concrete possibilities:

- SaaS vendors and platforms expose their capabilities as concepts with published synchronizations, rather than opaque SDKs plus tribal integration lore.
- Internal platforms at large orgs ship "golden" domain concepts as the default building blocks, with security, compliance, and observability baked into synchronizations.
- AI coding tools evolve from autocomplete engines into architectural co-designers, operating over explicit semantic structures.

None of this eliminates complexity. But it moves complexity into a space where it is named, visible, and open to formal scrutiny.


---

## When Software Reads Like It Means It

The work behind “What You See Is What It Does: A Structural Pattern for Legible Software” doesn’t hand us a finished framework, and the path from research DSL to production ecosystem is nontrivial. It will require robust tooling, language integrations, and the usual hardening against real-world messiness.

But the direction is hard to ignore. If AI is going to write and maintain a growing fraction of our systems, we cannot afford architectures that hide intent behind scattered glue and incidental structure.

Concepts and synchronizations offer a crisp alternative: design software so that its high-level story is explicit, composable, and enforceable. For developers, that’s a chance to get back something we’ve slowly lost in the microservice era—a place in the system where you can point and say: this is what it does.


Source: Based on research and reporting from MIT CSAIL, “MIT researchers propose new model for legible, modular software” (November 6, 2025), and the paper “What You See Is What It Does: A Structural Pattern for Legible Software.”