The Pragmatic Path: Shipping Rust in Safety-Critical Environments

Tech Essays Reporter

Rust is already running in production medical devices and industrial robots, but the journey from prototype to certified system reveals a critical tension: the language's compiler-enforced safety maps perfectly to functional safety requirements, yet the surrounding ecosystem for high-integrity development is still maturing. Teams navigate a landscape where dependency management, toolchain stability, and async runtime qualification become as important as the code itself.


The path to shipping Rust in safety-critical systems isn't a question of compiler capability, but of ecosystem maturity. In a series of interviews with engineers across automotive, aerospace, industrial, and medical domains, a consistent pattern emerged: Rust's memory safety and thread safety guarantees align remarkably well with what functional safety standards demand, yet the practical infrastructure for high-integrity development remains fragmented. This isn't theoretical—Rust is already deployed in production medical devices monitoring ICU patients and in mobile robotics systems certified to IEC 61508 SIL 2.

The fundamental insight from these conversations centers on a single tension. Rust's compiler does work that safety engineers traditionally performed through process, manual review, and external tools. "Roughly 90% of what we used to check with external tools is built into Rust's compiler," one principal firmware engineer told us. This shift from process-based enforcement to language-level guarantees changes the economics of safety-critical development, particularly for teams managing 15-to-20-year product lifetimes with "teams of teams." Yet once you move beyond prototyping into higher-criticality components—what automotive calls ASIL B through ASIL D—the ecosystem support thins out rapidly.

The Criticality Gradient

Safety-critical standards create a ladder of integrity levels. In automotive, this ranges from QM (quality management) to ASIL D, with each step demanding more rigorous development processes, verification evidence, and documentation. The story at QM looks fundamentally different from ASIL D, regardless of domain.

At low criticality, teams adopt a pragmatic approach: use Rust and the crates ecosystem to move quickly, then harden what ships. "We can use any crate from crates.io," an automotive OEM architect explained. "We have to take care to prepare the software components for production usage." This acceleration matters—teams report 100-fold performance improvements when replacing Python components with Rust, and the compiler catches entire classes of bugs that would require extensive testing in C.

But the calculus changes dramatically at higher integrity levels. Third-party dependencies become difficult to justify, and teams describe three patterns: rewrite critical code from scratch, internalize dependencies by bringing them in-house, or build abstraction layers designed for future replacement. "We tend not to use third-party dependencies or nursery crates," a firmware engineer stated bluntly. "Solutions become kludgier as you get lower in the stack."

This creates a fascinating paradox. Rust's value proposition—memory safety without garbage collection—maps perfectly to what safety engineers need, but the ecosystem's reliance on rapid iteration and latest-compiler compatibility conflicts with safety-critical stability requirements. Teams pin Rust toolchains for stability, then fight dependency drift because "almost all crates are implemented for the latest versions." One engineer described the time-consuming process of downgrading dependencies to match pinned toolchains.
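One common mitigation, sketched here with hypothetical version numbers and crate names, is to pin the toolchain in `rust-toolchain.toml` and declare an MSRV in `Cargo.toml`, so compiler drift surfaces as a build-time error rather than mid-upgrade breakage:

```toml
# rust-toolchain.toml: every developer and CI job gets exactly this compiler
[toolchain]
channel = "1.75.0"                 # hypothetical pinned version
components = ["rustfmt", "clippy"]
```

```toml
# Cargo.toml (excerpt): rust-version declares this crate's MSRV; recent
# cargo releases use the field to flag dependencies that require a newer
# compiler than the pinned toolchain provides
[package]
name = "safety-component"          # hypothetical crate name
version = "0.1.0"
rust-version = "1.75"
```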

The Evidence Stack

In safety-critical domains, "stability" means more than API compatibility—it means being able to explain what changes and what doesn't, and demonstrating that upgrade risk has been managed. The Rust edition system was repeatedly cited as a real advantage here, particularly for incremental migration strategies common in automotive programs. "[The edition system is] golden for automotive, where incremental migration is essential," one software engineer noted.

Yet operational stability extends beyond language features. Safety-critical software often runs on long-lived platforms and RTOSs. Even when "support exists," there can be caveats. QNX 8.0 support in Rust is currently no_std only, creating friction for teams that need full OS integration. The Rust target tier policy is clear, but regulated teams still need to map "tier" to "what can I responsibly bet on for this platform and this product lifetime." One senior engineer described the unacceptable scenario: "I had experiences where all of a sudden I was upgrading the compiler and my toolchain and dependencies didn't work anymore for the Tier 3 target we're using."

In no_std environments, core becomes the spine of Rust. Teams described it as both rich enough to build real products and small enough to audit. Much of Rust's safety leverage lives there: Option and Result, slices, iterators, Cell and RefCell, atomics, MaybeUninit, Pin. But gaps remain. "Most of the math library stuff is not in core, it's in std. Sin, cosine... the workaround for now has been the libm crate. It'd be nice if it was in core," one engineer explained.

The Async Question

Rust's async story presents both opportunity and uncertainty. Some safety-critical-adjacent systems are already heavily asynchronous—daemons, middleware frameworks, event-driven architectures. In automotive, many daemons in the AUTOSAR Adaptive Platform follow reactor patterns. "A lot of our software is highly asynchronous," a team lead developing middleware told us. "[C++14] doesn't really support these concepts, so some of this is lack of familiarity."

But async in safety-critical contexts isn't just a language feature—it's a runtime choice. "If we want to make use of async Rust, of course you need some runtime which is providing this with all the quality artifacts and process artifacts for ISO 26262," the same team lead noted. The question of certifying or qualifying async runtimes, scheduling assumptions, and runtime behavior becomes central at higher integrity levels.

Work is already happening in this space. Eclipse SDV's Eclipse S-CORE project includes an Orchestrator written in Rust for their async runtime, aimed at safety-critical automotive software. But the broader question of what makes an async runtime "safety-case friendly" remains to be defined in concrete terms.

Recommendations for Ecosystem Evolution

The interviews revealed several patterns that could guide ecosystem development:

Shared ownership of requirements: The Ferrocene Language Specification (FLS) demonstrates a successful model. It started as an industry effort to create a specification suitable for safety qualification, companies invested in the work, and it now has a sustainable home under the Rust Project. Contrast this with MC/DC coverage support, where earlier efforts stalled without sustained industry engagement. Renewed interest is now being channeled through the Safety-Critical Rust Consortium into a proposed Rust Project Goal for 2026, with shared ownership of requirements and primary implementation by companies with a vested interest.

Ecosystem-wide MSRV conventions: The dependency drift problem requires coordination between the Rust Project release team and the broader ecosystem. An LTS release scheme, combined with encouraging libraries to maintain MSRV compatibility with LTS releases, could reduce friction. The Safety-Critical Rust Consortium can help articulate requirements and adoption patterns.

Target-focused readiness checklists: The friction isn't about unclear policies but about translating "tier" into practical decisions. A consolidated checklist showing which targets exist, which are no_std only, last tested OS versions, and top blockers would lower barriers. This makes it easier for teams depending on specific targets to contribute to maintaining them.

Dependency lifecycle playbooks: Teams already follow patterns—use crates early for QM, track carefully, shrink dependencies for higher-criticality parts; for ASIL B+, avoid third-party crates entirely or use abstraction layers. Documenting these patterns as reusable playbooks would help new teams avoid trial and error.

Safety-case friendly async runtime requirements: The Safety-Critical Rust Consortium could lead efforts to define what "safety-case friendly" means in concrete terms, working with the async working group and libs team on technical feasibility and design.

Interop as part of the safety story: Most teams aren't rewriting their world in Rust—they're integrating Rust into existing C and C++ systems. "We rely very heavily on FFI compatibility between C, C++, and Rust," an embedded systems engineer explained. "In a safety-critical space, that's where the difficulty ends up being: generating bindings, finding out what the problem was." Guidance and tooling to keep interfaces correct, auditable, and in sync would help.


The Path Forward

Rust is already deployed in production for safety-critical systems. The path exists. The language's defaults map directly to what functional safety engineers spend their time preventing. But ecosystem support thins out as you move toward higher-criticality software.

The question isn't whether Rust can meet safety-critical requirements—it already does in production systems. The question is how to make the ecosystem infrastructure as robust as the language itself. This requires collaboration between the Rust Project's deep technical knowledge and industry's concrete requirements, validation, and maintenance commitments.

For teams considering Rust in safety-critical contexts, the advice is pragmatic: start with QM components where you can leverage crates and move quickly. Build abstraction layers around critical boundaries. Document your dependency lifecycle patterns. Engage with the Safety-Critical Rust Consortium to share requirements and learn from others navigating the same challenges.

The goal isn't to make Rust perfect for every safety-critical scenario tomorrow, but to systematically address the gaps that prevent teams from leveraging its strengths at higher integrity levels. The foundation is solid; the ecosystem is catching up.


Get involved: If you're working in safety-critical Rust, or you want to help make it easier, check out the Rust Foundation's Safety-Critical Rust Consortium and the in-progress Safety-Critical Rust coding guidelines. Hearing concrete constraints, examples of assessor feedback, and what "evidence" actually looks like in practice is incredibly helpful.

For context on how rigor scales with cost in ISO 26262, see the Feabhas guide. For current QNX target status, see the QNX target documentation. The FLS team was created under the Rust Project in 2025 and is now actively maintaining the specification. For MC/DC context, see the tracking issue.
