Zig's Package Management Challenge: The M×N Supply Chain Problem

Zig's new package manager faces the same ecosystem integration challenges that every new language encounters, revealing a deeper need for standardized protocols in software supply chains.

Zig shipped a built-in package manager in version 0.11 in August 2023. It uses build.zig.zon files for manifests and fetches dependencies directly from URLs, usually tarballs on GitHub. There's no central registry yet, though the community runs unofficial indexes like zpm and aquila. The package manager works. You can declare dependencies, fetch them, build against them. That's the easy part. The hard part is everything else: the ecosystem of tools, services, and infrastructure that makes a package manager usable in production.
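
For reference, a minimal manifest looks roughly like this. The exact field set has shifted between Zig releases, and the dependency name, URL, and hash here are placeholders:

    .{
        .name = "my_project",
        .version = "0.1.0",
        .dependencies = .{
            // Placeholder dependency: the URL points at a source tarball
            // and the hash pins its contents (zig fetch can compute the
            // real value).
            .known_folders = .{
                .url = "https://github.com/ziglibs/known-folders/archive/<commit>.tar.gz",
                .hash = "1220...",
            },
        },
    }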

Look at the package management landscape. Dozens of categories, hundreds of tools. For Zig to have the same tooling support as npm or Cargo, each of those tools either needs to add Zig support, or the Zig community needs to build alternatives. That's a lot of work.

What the community has to build

Some things only the Zig community can do. Nobody else will write the build.zig.zon parser. Nobody else knows the resolution semantics. These are the parts that require language expertise:

Manifest and lockfile parsing. Tools like bibliothecary, syft, and osv-scalibr parse dependency files across ecosystems. Each needs a Zig parser added. Right now, none of them support build.zig.zon.
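
To make that concrete: with no off-the-shelf ZON parser in those ecosystems, a first pass tends to look like the naive sketch below, which scrapes URL and hash pairs out of the manifest with a regular expression. Illustrative only; a real integration needs a proper parser.

    // Naive sketch: scrape url/hash pairs out of build.zig.zon text.
    // A real integration needs a real ZON parser to recover dependency
    // names and handle path-based or nested dependencies.
    interface FetchedDependency {
      url: string;
      hash?: string;
    }

    function extractZigDependencies(manifest: string): FetchedDependency[] {
      const deps: FetchedDependency[] = [];
      const entry = /\.url\s*=\s*"([^"]+)"(?:\s*,\s*\.hash\s*=\s*"([^"]+)")?/g;
      for (const match of manifest.matchAll(entry)) {
        deps.push({ url: match[1], hash: match[2] });
      }
      return deps;
    }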

Vulnerability scanning. pip-audit, bundler-audit, and cargo-audit are language-specific tools that check dependencies against advisory databases. Zig needs a zig-audit equivalent, plus an advisory database to check against.
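
The core loop of such a tool is not complicated; the missing piece is the data. The sketch below uses OSV's real query API, but since no Zig ecosystem is registered with OSV yet, the ecosystem string and package name are hypothetical and the query would return nothing today.

    // Hypothetical zig-audit core: ask OSV which advisories affect a
    // dependency. "Zig" is not a registered OSV ecosystem, so this is
    // a sketch of the shape, not something that works today.
    async function advisoriesFor(name: string, version: string): Promise<unknown[]> {
      const response = await fetch("https://api.osv.dev/v1/query", {
        method: "POST",
        body: JSON.stringify({
          version,
          package: { name, ecosystem: "Zig" },
        }),
      });
      const result = await response.json();
      return result.vulns ?? [];
    }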

SBOM generation. cdxgen and syft generate SBOMs from project files. They need to understand Zig's dependency format to include Zig packages in the bill of materials.
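
A component entry for a Zig package would look roughly like the sketch below, written as an object literal so it can be annotated. Everything in it is hypothetical, including the purl, because the zig purl type hasn't been merged.

    // Roughly the CycloneDX component a Zig-aware SBOM tool would emit.
    // The pkg:zig purl type is still an unmerged proposal, so the
    // identifier below is hypothetical.
    const component = {
      type: "library",
      name: "known_folders",               // placeholder package
      version: "0.1.0",
      purl: "pkg:zig/known_folders@0.1.0", // hypothetical purl type
    };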

Dependency tree visualization. Cargo has cargo tree, npm has npm ls. Zig needs something equivalent to show the resolved dependency graph.

Registry software. If Zig wants a central registry, someone has to build and run it. Crates.io, RubyGems.org, PyPI all required significant engineering effort. The unofficial indexes exist but aren't authoritative.

PURL and VERS types. The Package URL spec and version range spec are standards, but they're essentially maps of existing ecosystems rather than higher-order abstractions. Each new package manager has to propose a type, document its semantics, and get the PR merged. Zig has an open proposal that's been pending since 2023. Without a PURL type, Zig packages can't be referenced in SBOMs, advisory databases, or cross-ecosystem tooling in a standardized way.
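
For a sense of what's blocked on that merge: a Zig package reference and a version range might end up looking something like the lines below, though the final type definition could choose different rules.

    pkg:zig/known_folders@0.1.0
    vers:zig/>=0.1.0|<0.2.0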

What vendors need to care about

Other integrations require buy-in from companies who may not care about Zig yet. Market share matters here. If you're a SaaS vendor prioritizing what to support next, Zig is competing against languages with larger user bases. Even if the Zig community does everything right, they're still waiting on Dependabot, Renovate, and Snyk to care.

Dependency update tools. Dependabot supports a fixed set of ecosystems. Adding a new one requires GitHub engineering time. Renovate is more extensible but still needs a manager plugin. Neither supports Zig today. There's a Dependabot issue and a Renovate discussion, both from 2023, both stalled.

Vulnerability databases. The GitHub Advisory Database and OSV need advisories filed against Zig packages using Zig's identifier scheme. That requires agreeing on how to identify Zig packages (PURL has a zig type proposal but it's not merged).

SCA tools. Snyk, Socket, Sonatype, and others would need to add Zig support. Each vendor makes independent decisions about what's worth supporting.

Enterprise artifact repositories. JFrog Artifactory and Sonatype Nexus support proxying and hosting packages for many ecosystems. Zig isn't on the list.

Metadata platforms. deps.dev, Libraries.io, and ecosyste.ms aggregate package data across ecosystems. Each needs to understand Zig's package format and index Zig packages from wherever they're published.

Forge integrations. GitHub's dependency graph, GitLab's dependency scanning, and Gitea's security features all need to parse Zig manifests to show Zig dependencies in their UIs.

What else needs updating

SBOM formats. CycloneDX and SPDX have ecosystem-specific guidance. Zig needs representation in both.

Trusted publishing. PyPI's Trusted Publishers and npm's provenance rely on Sigstore and registry-specific OIDC flows. If Zig gets a central registry, it needs this infrastructure too.

How this usually goes

The typical path looks like this:

  1. Package manager ships with the language
  2. Early adopters manage dependencies manually
  3. Community builds minimal tooling (a parser here, an index there)
  4. Language gains traction, vendors start noticing
  5. Major tools add support one by one, in no particular order
  6. Eventually, enough coverage exists that the ecosystem feels complete

This process takes years. Go modules shipped in 2018 and still lack full tooling parity with older ecosystems. Rust has been around since 2015 and Cargo is well-supported now, but that's a decade of incremental integration.

Somewhere along the way, package manager designers realize that some of their early decisions make integration harder. Maybe they didn't assign unique identifiers to packages. Maybe their version scheme doesn't map cleanly to PURL. Maybe they fetch dependencies from URLs instead of a registry, which breaks assumptions baked into every SBOM tool. By then, users depend on the current behavior. Changing a package manager after launch is like changing the hull of a submarine while it's searching for the Titanic.

Each new package manager goes through the same loop. Each tool vendor reimplements the same patterns: parse a manifest, extract dependencies, check against advisories. The work is duplicated dozens of times across the ecosystem, with each implementation making slightly different decisions about edge cases.

Beyond the engineering, there's human coordination. Shepherding PRs through repos maintained by volunteers with different priorities. Getting PURL proposals reviewed by a committee that meets sporadically. Convincing SCA vendors to prioritize your ecosystem over the next one in line. Zig's PURL proposal has been open since 2023. That's not a technical problem. It's part of why package management is a wicked problem: too many stakeholders, no single authority, solutions that create new problems.

What would make this easier

Package management is in its pre-LSP era. Before the Language Server Protocol, every IDE had to implement support for every language: M editors × N languages = M×N integrations. LSP changed that to M+N. Each editor implements the protocol once, each language implements a server once, and they all work together.

Package management has the same M×N problem. Every tool (Dependabot, Snyk, Syft, deps.dev) implements support for every ecosystem (npm, PyPI, Cargo, Go) separately. Each integration is custom. When Zig arrives, it goes to the back of every queue.

Every codebase is a dependency graph. The syntax varies, the resolution algorithms differ, the registries have different APIs, but the structure is the same: nodes are packages, edges are version constraints, and the goal is a consistent set of concrete versions. Zig's graph looks like Cargo's graph looks like npm's graph, once you strip away the surface differences.
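
That common core is small enough to write down. A rough sketch, with names of my own rather than anything standardized:

    // The ecosystem-independent core: packages, version constraints
    // between them, and a resolution that pins one concrete version per
    // package. Names are illustrative, not a standard.
    interface PackageRef {
      ecosystem: string; // "zig", "npm", "cargo", ...
      name: string;
    }

    interface Constraint {
      dependent: PackageRef;
      dependency: PackageRef;
      range: string; // ecosystem-specific syntax, e.g. "^1.2.0"
    }

    interface Resolution {
      // key: "ecosystem:name", value: the pinned concrete version
      pins: Map<string, string>;
    }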

We need a Dependency Lifecycle Protocol (DLP), an LSP for the package management world. In A Protocol for Package Management, I sketched what this might look like: common definitions for manifest structure, resolution behavior, registry APIs. If it existed, a new package manager could implement against it. Tools that speak the protocol would get Zig support without each SCA vendor adding it separately.
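
LSP specifies itself as a set of TypeScript interfaces, and a DLP could take the same form. The sketch below is only to give a sense of the surface area; the names are illustrative, not the spec from that essay.

    // Illustrative sketch of a Dependency Lifecycle Protocol surface,
    // in the spirit of LSP's interface-based spec. Placeholder names,
    // not a finished specification.
    interface Manifest {
      ecosystem: string;
      dependencies: { name: string; range: string }[];
    }

    interface Lockfile {
      ecosystem: string;
      pinned: { name: string; version: string; integrity?: string }[];
    }

    interface Advisory {
      id: string; // e.g. an OSV identifier
      affected: { name: string; ranges: string[] };
      summary: string;
    }

    interface DependencyLifecycleServer {
      // Parse an ecosystem-specific manifest into the common model.
      parseManifest(contents: string): Manifest;
      // Resolve ranges to concrete versions, producing a lockfile.
      resolve(manifest: Manifest): Promise<Lockfile>;
      // List published versions for a package, wherever it is hosted.
      listVersions(pkg: string): Promise<string[]>;
      // Report advisories affecting a pinned dependency set.
      audit(lockfile: Lockfile): Promise<Advisory[]>;
    }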

The same problem twice

The dependency layer in digital sovereignty makes a similar point from a different angle: dependencies are a chokepoint that nation-states and institutions don't control. The Zig problem and the sovereignty problem are the same problem. One is "why can't a new language ecosystem bootstrap quickly" and the other is "why can't institutions control their own dependency infrastructure."

Both point to missing abstraction layers that would allow substitution. The lack of a protocol creates lock-in by default. Not malicious, just gravitational. If you're Zig, you need Dependabot and Snyk and GitHub's dependency graph. If you're a European institution, you need those same tools because that's where the vulnerability data lives. A protocol would make the dependency layer contestable. Run your own registry that federates with others. Stand up a regional vulnerability database that speaks the same language. Use tooling that isn't controlled by three American companies.

Governments already mandate standards for procurement: accessibility, security certifications, data residency. If US federal or EU procurement required dependency tooling that implements a common protocol, the incentive structure would invert. Government procurement is a massive market that moves in blocks. If you can't sell to governments without protocol compliance, every vendor finds budget for it overnight. Zig gets support as a side effect: if Snyk implements the protocol to keep selling to governments, Zig gets coverage by conforming to the same spec.

The Cyber Resilience Act is already pushing in this direction with SBOM requirements. PURL, OSV, and CycloneDX are attempts at standards, but they're descriptive rather than prescriptive. They document what exists rather than defining what should exist. The CRA mandates outputs without mandating the interoperability layer that would make those outputs meaningful across ecosystems.

Right now, the cost of launching a new package manager includes rebuilding the entire surrounding infrastructure. Languages stick with existing tools even when they're not a great fit, because the integration burden is too high. Zig is going through this now. Rue, a research language exploring memory safety with a gentler learning curve than Rust, doesn't have a package manager yet. When it does, it will face the same integration slog. Until the protocol layer exists, every new language will.

Zig's URL-based fetching is similar to how Go modules launched, pulling directly from version control hosts. Go eventually added proxy.golang.org and sum.golang.org to provide caching, checksums, and availability guarantees. If Zig follows a similar path, that infrastructure is still ahead of it.
