
SBCL Fibers: Lightweight Cooperative Threads for Common Lisp

Trends Reporter

SBCL introduces fibers - lightweight userland cooperative threads that enable high-concurrency applications with minimal resource overhead. This implementation preserves the sequential programming model while achieving the efficiency of event-driven I/O, handling tens of thousands of concurrent connections with just a fraction of the memory required by OS threads.

The Common Lisp community has been buzzing about a significant development in SBCL's concurrency model: the introduction of fibers. These lightweight cooperative threads represent a fundamental shift in how high-concurrency applications can be written in Common Lisp, offering a compelling alternative to both OS threads and traditional event-driven programming.

The Fiber Concept Explained

Fibers are essentially user-space threads with their own control stack and binding stack, scheduled cooperatively by a library-level scheduler rather than the kernel. This approach addresses a critical problem in server applications: the tension between the natural programming model (one thread of control per connection) and the resource constraints of OS threads.

As the documentation explains, "Each OS thread in SBCL carries a full-sized control stack (typically 8 MB), a binding stack, signal handling infrastructure, and a kernel task_struct. Creating a thread requires mmap, clone, and TLS setup; destroying one requires the reverse." At scale, this becomes prohibitively expensive - 10,000 concurrent connections would require 80 GB of virtual address space for stacks alone.

Fibers solve this by using much smaller stacks (256 KB by default) and context switching that saves and restores just six registers in user space. This enables thousands of fibers to share a small pool of OS carrier threads, dramatically reducing resource consumption while preserving the sequential programming model.
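
To make the model concrete, here is a sketch of what thread-per-connection code looks like under fibers. The spawn operator (sb-fiber:spawn here) and its keyword arguments are assumed names for illustration, as are the helper functions; only the programming model is the point:

```lisp
;; Hypothetical fiber API -- SB-FIBER:SPAWN and the helpers
;; (ACCEPT-CONNECTION, SOCKET-STREAM, HANDLE-REQUEST) are assumed
;; names for illustration, not the finalized SBCL interface.
(defun serve-connection (socket)
  ;; Plain sequential code: reads and writes look blocking, but
  ;; inside a fiber they yield to the scheduler instead of
  ;; blocking the carrier thread.
  (loop for line = (read-line (socket-stream socket) nil)
        while line
        do (write-line (handle-request line) (socket-stream socket))))

(defun accept-loop (listener)
  ;; One fiber per connection: with 256 KB stacks, tens of
  ;; thousands of these fit where OS threads could not.
  (loop for socket = (accept-connection listener)
        do (sb-fiber:spawn (lambda () (serve-connection socket))
                           :stack-size (* 256 1024))))
```

The body of serve-connection is exactly what one would write for an OS thread; only the spawn call changes.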

Technical Implementation Highlights

The SBCL fiber implementation is notable for its comprehensive approach to handling the complexities of Common Lisp's runtime environment:

  1. Full State Preservation: Unlike simpler fiber implementations, SBCL fibers correctly handle dynamic variable bindings, catch blocks, and unwind-protect chains. This ensures that code written for threads works unchanged in fibers.

  2. Zero-Allocation Context Switching: The context switch path is carefully optimized to avoid heap allocation, using raw machine words instead of Lisp objects to prevent GC pressure during switches.

  3. Work-Stealing Scheduling: The implementation uses Chase-Lev work-stealing deques to distribute work across multiple carrier threads without centralized scheduling or contention.

  4. Efficient I/O Multiplexing: On Linux, fibers use edge-triggered epoll with one-shot mode to eliminate spurious wakeups and avoid the thundering-herd problem.

  5. Memory Efficiency: Stack pooling with madvise(MADV_DONTNEED) recycling minimizes the cost of fiber creation and destruction.
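
Point 1 above, full preservation of dynamic bindings, is the property that lets thread-style code move to fibers unchanged. A sketch of what the guarantee means in practice, again using an assumed sb-fiber:spawn and hypothetical helpers:

```lisp
(defvar *request-id* nil
  "A special variable, dynamically bound per unit of work.")

;; Each fiber carries its own binding stack, so a LET binding of a
;; special variable stays private to that fiber across yields, even
;; when many fibers interleave on the same carrier thread.
(defun start-request (id)
  (sb-fiber:spawn                  ; assumed spawn operator
   (lambda ()
     (let ((*request-id* id))
       (do-work)                   ; may yield at I/O points; the
       (log-result *request-id*))))) ; binding is restored on resume
```

Without per-fiber binding stacks, two interleaved requests on the same carrier thread would see each other's *request-id*.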

Performance Implications

Benchmark results demonstrate the significant advantages of fibers for high-concurrency workloads:

  • At 10,000 concurrent connections, fibers deliver 102,710 requests per second compared to 55,493 for threads - an 85% improvement
  • Memory usage is dramatically reduced: 10,000 fibers require approximately 2.5 GB of virtual address space versus 80 GB for OS threads
  • Context switching is sub-microsecond (0.48 μs per switch), several times cheaper than a typical kernel thread context switch (commonly 1-2 μs once scheduling and cache effects are included)
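
The memory figures follow directly from the per-stack sizes quoted earlier, as a quick REPL check confirms:

```lisp
;; 10,000 fibers at the default 256 KB stack each:
(/ (* 10000 256 1024) (expt 1024.0 3))
;; => ~2.44 GiB of virtual address space (the article's "2.5 GB")

;; versus 10,000 OS threads at 8 MB each:
(/ (* 10000 8 1024 1024) (expt 1024.0 3))
;; => ~78.1 GiB (the article's "80 GB")
```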

These numbers suggest that fibers could enable Common Lisp web servers to compete effectively with languages known for high concurrency, such as Go, Erlang, and Node.js.

Integration with Existing Code

One of the most compelling aspects of SBCL's fiber implementation is its transparent integration with existing code. The documentation states: "Existing SBCL code should work inside fibers without modification. grab-mutex, condition-wait, wait-until-fd-usable, sleep, and wait-for all detect when they are running inside a fiber and yield cooperatively instead of blocking the carrier thread."

This is achieved through careful patching of key functions in SBCL's runtime. For example, sb-sys:wait-until-fd-usable checks for fiber context and dispatches to a fiber-aware implementation when appropriate. This means that libraries like Hunchentoot can adopt fibers with minimal changes - primarily just replacing the thread-per-connection taskmaster with a fiber-based one.
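
The dispatch described above can be pictured roughly as follows; current-fiber and the two wait functions are assumed names sketching the pattern, not SBCL source:

```lisp
;; Sketch of the patching pattern the article describes: a blocking
;; primitive checks whether it is running on a fiber and, if so,
;; parks the fiber with the I/O multiplexer instead of blocking
;; the carrier thread.
(defun wait-until-fd-usable (fd direction &optional timeout)
  (if (current-fiber)                        ; assumed introspection fn
      ;; Register FD with epoll/kqueue and yield; the scheduler
      ;; resumes this fiber when the FD becomes ready.
      (fiber-wait-for-fd fd direction timeout)
      ;; Ordinary thread: fall back to the classic blocking poll.
      (blocking-wait-for-fd fd direction timeout)))
```

Because the check happens inside the runtime primitive, callers several layers up, such as a Hunchentoot request handler, never need to know which branch was taken.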

Community Reaction and Adoption

The fiber implementation has been met with enthusiasm in the Common Lisp community. Developers have noted that this feature addresses long-standing concerns about Lisp's suitability for high-concurrency applications. The fact that it's being implemented directly in the SBCL core - rather than as an external library - suggests it's positioned as a first-class concurrency primitive.

However, there are some caveats and considerations:

  1. Stack Size Limitations: Unlike some other implementations, SBCL fibers don't support dynamic stack growth. This means developers must carefully choose appropriate stack sizes and may need to restructure deeply recursive code.

  2. Pinning for Thread-Affine Operations: When interacting with thread-local foreign state (like OpenSSL contexts), fibers must be pinned to prevent migration between carrier threads. This can reduce the benefits of work stealing if overused.

  3. Debugging Complexity: While the implementation includes introspection capabilities like print-fiber-backtrace, debugging fiber-based applications presents unique challenges compared to traditional threaded code.
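
Caveat 2 is worth a sketch. With thread-affine foreign state such as an OpenSSL context, the fiber must not resume on a different carrier thread mid-operation. Assuming a pinning form like the hypothetical sb-fiber:with-pinned-fiber:

```lisp
;; WITH-PINNED-FIBER is a hypothetical name for the pinning
;; operation the article describes: while the body runs, the fiber
;; may still yield, but it will always resume on the same carrier
;; thread. SSL-CONNECT stands in for a real binding's handshake call.
(defun tls-handshake (socket ssl-context)
  (sb-fiber:with-pinned-fiber ()
    ;; OpenSSL keeps per-thread error queues; migrating between
    ;; carrier threads mid-handshake would corrupt that state.
    (ssl-connect socket ssl-context)))
```

Keeping the pinned region as small as possible limits the cost to work stealing that the article warns about.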

Platform Support

Platform coverage is impressively broad, with full support for:

  • x86-64 (Linux, macOS, Windows)
  • ARM64 and ARM32
  • PPC64 and PPC32
  • RISC-V (RV64)

Each architecture has its own optimized assembly implementation, and the I/O multiplexing adapts to platform capabilities (epoll on Linux, kqueue on BSD, poll as a fallback).

The Road Ahead

The fiber implementation is still marked as work-in-progress, with the documentation noting that "details may change." This suggests that while the core functionality is solid, there may be refinements ahead.

Potential future improvements could include:

  • More sophisticated scheduling for mutex and condition variable waits
  • Dynamic stack resizing
  • Enhanced debugging and introspection tools

Conclusion

SBCL fibers represent a significant advancement for Common Lisp in the concurrency space. By providing a lightweight threading model that preserves the sequential programming model while achieving high scalability, they position Common Lisp as a strong contender for high-performance, concurrent applications.

The implementation demonstrates deep understanding of both the theoretical challenges of cooperative multitasking and the practical realities of Common Lisp's runtime environment. For developers building network services in Common Lisp, fibers offer a compelling alternative to both traditional threads and complex event-driven architectures.

The availability of this feature in SBCL - one of the most widely used Common Lisp implementations - suggests that we may see a new wave of high-performance Common Lisp applications in the coming years, particularly in areas like web services, real-time systems, and data processing pipelines where efficient concurrency is critical.

For developers interested in exploring fibers, the implementation is available in the SBCL fibers GitHub repository. The official documentation provides comprehensive details on the API and implementation.
