Vercel has released json-render, an open-source framework that enables AI models to generate structured user interfaces from natural language prompts. The framework, which Vercel describes as "Generative UI," allows developers to define a catalog of permitted components and actions using Zod schemas, while an LLM generates a JSON specification constrained to that catalog. The project has already accumulated over 13,000 stars on GitHub since its January 2026 launch and is available under the Apache 2.0 license.
The core innovation behind json-render is its approach to bridging natural language and UI generation. Rather than having AI models generate arbitrary code, developers define what components and actions are available through Zod schemas. The AI then generates a flat JSON tree of typed elements referencing only catalog entries, which the Renderer component maps to real implementations. This constraint-based approach provides both security and predictability, preventing the generation of malicious code while ensuring the output remains within defined boundaries.
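The catalog pattern can be sketched in plain TypeScript. This is an illustrative sketch, not json-render's actual API: the `catalog`, `Spec`, and `validate` names are hypothetical, and plain prop checks stand in for the Zod schemas the framework uses. The point is the mechanism: the model's flat JSON tree is checked against a closed set of component types before anything is rendered.

```typescript
// Hypothetical sketch of the catalog pattern (not json-render's real API).
// Each element references a component type by name; children are IDs into
// the same flat map, so the tree stays serializable and easy to validate.
type Element = { type: string; props: Record<string, unknown>; children?: string[] };
type Spec = { root: string; elements: Record<string, Element> };

// The catalog: only these component types (and these props) may appear.
// In json-render itself this role is played by Zod schemas.
const catalog: Record<string, Set<string>> = {
  Card: new Set(["title"]),
  Text: new Set(["value"]),
};

// Reject any element whose type, props, or child references fall
// outside the catalog, before the spec ever reaches a renderer.
function validate(spec: Spec): string[] {
  const errors: string[] = [];
  for (const [id, el] of Object.entries(spec.elements)) {
    if (!(el.type in catalog)) {
      errors.push(`${id}: unknown component "${el.type}"`);
      continue;
    }
    for (const prop of Object.keys(el.props)) {
      if (!catalog[el.type].has(prop)) {
        errors.push(`${id}: prop "${prop}" not allowed on ${el.type}`);
      }
    }
    for (const child of el.children ?? []) {
      if (!(child in spec.elements)) {
        errors.push(`${id}: missing child "${child}"`);
      }
    }
  }
  return errors;
}

// A flat JSON tree such as an LLM might emit, referencing only catalog entries.
const spec: Spec = {
  root: "card1",
  elements: {
    card1: { type: "Card", props: { title: "Revenue" }, children: ["t1"] },
    t1: { type: "Text", props: { value: "$1.2M" } },
  },
};

console.log(validate(spec)); // []
```

Because the validator rejects anything outside the catalog, a model that hallucinates a `<script>`-like component or an unknown prop produces a validation error rather than rendered output, which is the security property the constraint-based approach is after.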
Vercel CEO Guillermo Rauch described the technology as "plugging the AI directly into the rendering layer" and called it "a very disruptive technology." The framework ships with 36 pre-built shadcn/ui components for teams that want a head start, alongside packages for PDF generation, HTML email, video via Remotion, OG image rendering, and 3D scenes through React Three Fiber. Renderers are currently available for React, Vue, Svelte, Solid, React Native, and other frameworks.
Community reaction has been notably mixed. On Hacker News, one user drew parallels to 4GLs from the late 1990s, noting that "giving an AI accessible structure to this gets AI into the realm of the various 4GLs back in late 90s which made user created forms so much easier." Another developer reported success with building text-to-dashboard applications, finding it "more robust than when I tried the exact same thing with structured outputs API and GPT-4 era models."
However, some developers questioned why Vercel would "reinvent it as a new system" when existing standards like OpenAPI and JSON Schema already describe data structures. A respondent clarified the distinction: "OpenAPI, JsonSchema, GraphQL all describe Data. This describes User Interfaces. The closest alternative would be to just return React JS code directly. But this adds a layer of constraint and control, preventing LLMs to generate e.g. malicious React JS code."
On Reddit, developers observed that "the shift is real but the role change you're describing is already happening in pockets. We've been moving toward constraint-based systems for years, design tokens, component libraries, storybook configs. This just pushes that boundary further into runtime composition instead of build-time authoring."
Google has introduced a comparable project called A2UI (Agent-to-User Interface), which was quietly released in late 2025. A comparison on Medium noted that while the two share the same high-level pipeline of AI to JSON to component catalog to UI, they solve different problems: json-render is described as a "tool" tightly coupled to a specific application's component set, while A2UI positions itself as a "protocol" for cross-agent interoperability.
The framework is written in TypeScript and organized as a pnpm monorepo, with packages published to npm under the @json-render scope. The GitHub repository includes a quick start guide, a playground, and examples for each supported renderer.
The release marks a notable step in AI-assisted development tooling: a structured approach to UI generation that balances the generative flexibility of LLMs against the constraints production applications require. As the approach matures, it may change how developers think about interface composition and the role AI plays in building it.
