Fixed-Point Arithmetic: The High-Performance Alternative to Floating-Point for Resource-Constrained Systems
The Hidden Engine of Efficiency: Why Fixed-Point Matters
In resource-constrained environments like embedded systems or real-time graphics, floating-point operations can bottleneck performance due to their CPU-intensive nature. Fixed-point arithmetic solves this by repurposing integer hardware to handle fractional values, offering a compelling trade-off: reduced numerical range for higher computational speed and deterministic behavior. By strategically placing a "virtual" decimal point within a standard integer bitfield, developers gain fine-grained resolution without the overhead of floating-point units—crucial for applications from drone control to audio processing.
How Fixed-Point Works: Bits, Decimals, and Trade-Offs
At its core, fixed-point reimagines a signed 32-bit integer. A standard int implicitly places its radix (binary) point to the right of the least significant bit, limiting it to whole numbers; fixed-point simply moves that point. Positioning it between bits 14 and 15, for example, creates a 16.15 format: one sign bit, 16 bits of integer, and 15 bits of fraction. This shrinks the range (to ±$2^{16}$) but boosts resolution to $2^{-15}$. The CPU still processes the value as an ordinary integer; only the programmer's interpretation changes, and that reinterpretation is what unlocks fractional precision.
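In code, the article's fix type presumably boils down to an ordinary 32-bit integer typedef; a minimal sketch of the 16.15 layout (the variable name here is illustrative):

typedef signed int fix;        // 32 bits: 1 sign bit, 16 integer bits, 15 fraction bits

// 1.5 is stored as 1.5 * 2^15 = 49152; the hardware only ever sees that
// integer, while the programmer treats bit 15 as the ones place.
fix one_and_a_half = 49152;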
Core Operations: When Arithmetic Just Works (and When It Doesn't)
Addition/Subtraction: These map directly onto the integer hardware, because both operands' binary points are already aligned:
fix a = int2fix(10);   // Conversion macro
fix b = int2fix(5);
fix result = a + b;    // Correctly yields 15 in fixed-point

Multiplication: Requires care due to expanded bit-width. Multiplying two 16.15 values produces a 64-bit intermediate with 34 integer and 30 fractional bits. To fit back into 32 bits, shift right by 15 bits, discarding the overflow/underflow bits:
#define fix_mult(a, b) ((fix)(((int64_t)(a) * (int64_t)(b)) >> 15))

Why it matters: this avoids floating-point's latency, but it demands range checks to prevent overflow, which becomes important in applications like physics simulations.
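As a quick sanity check of the macro (using the int2fix conversion defined below), multiplying 2.0 by 3.0 works out as follows:

fix two   = int2fix(2);              // 2.0 -> 65536
fix three = int2fix(3);              // 3.0 -> 98304
fix six   = fix_mult(two, three);    // 65536 * 98304 = 6442450944; >> 15 gives 196608 == 6.0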
Division: Avoid if possible. It’s slow and complex in fixed-point; prefer multiplication by reciprocals.
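For completeness, a rough sketch of both options (the helper names below are illustrative, not from the source): a true division must pre-scale the dividend by $2^{15}$ so the quotient keeps its 15 fraction bits, while a known divisor is cheaper to handle by multiplying with a precomputed reciprocal.

// Illustrative only: pre-shift the dividend so the quotient retains 15 fraction bits.
#define fix_div(a, b) ((fix)(((int64_t)(a) << 15) / (int64_t)(b)))

// Faster for a constant divisor: multiply by its reciprocal instead.
static const fix ONE_THIRD = 10923;   // round(2^15 / 3), roughly 0.3333 in 16.15
// x / 3 then becomes fix_mult(x, ONE_THIRD)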
Conversions and Special Functions: Navigating the Pitfalls
Integer ↔ Fixed-Point: Shift bits to move the binary point. For the 16.15 format:
#define int2fix(x) ((fix)((x) << 15))    // Int to fix
#define fix2int(x) ((int)((x) >> 15))    // Fix to int

Float ↔ Fixed-Point: Use "dimensional analysis," but sparingly due to float overhead:
#define float2fix(x) ((fix)((x) * 32768.0))   // 1 float unit = 2^15 fix units
#define fix2float(x) ((float)((x) / 32768.0))

Best practice: limit these conversions to initialization; use pure fixed-point inside loops.
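A small sketch of that practice (variable names are illustrative): convert constants once, keep the inner loop in pure fixed-point, and convert back only when a float is actually needed.

fix pos = int2fix(0);
fix vel = float2fix(1.25);               // converted once, outside the loop
fix dt  = float2fix(0.01);               // note: values smaller than 2^-15 would vanish
for (int i = 0; i < 1000; i++) {
    pos = pos + fix_mult(vel, dt);       // integer add plus fix_mult; no floats in the loop
}
float pos_f = fix2float(pos);            // convert back only for output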
Square Root: Surprisingly, converting to float, computing sqrt, and converting back is often fastest:
fix fix_sqrt(fix x) {
    return float2fix(sqrt(fix2float(x)));   // sqrt requires <math.h>
}

This leverages floating-point's dedicated exponent handling. For ultra-optimized cases, explore bit-twiddling tricks like the Fast Inverse Square Root.
Random Numbers: Generate values in specific ranges using bit-shifts:
fix rand_range_0_to_1() {
    // rand() is from <stdlib.h>; assumes a 31-bit RAND_MAX (2^31 - 1)
    return (fix)(rand() >> 16);              // top 15 bits land in [0, 1)
}

fix rand_range_neg1_to_1() {
    return (fix)((rand() >> 15) - 32768);    // 16 bits, shifted into [-1, 1)
}
Real-World Impact: Speedups and Compiler Support
The RP2040 microcontroller demonstrates fixed-point's power: when computing the Mandelbrot set, one core using fixed-point completed its half 5.3x faster than the other using floating-point. For developers, the stdfix.h header and its _Accum type simplify adoption by handling the shifts transparently, though custom implementations offer finer control.
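A brief sketch of that route, assuming GCC's ISO/IEC TR 18037 fixed-point extension as shipped with the RP2040's arm-none-eabi toolchain, where a signed _Accum is a 32-bit value with 15 fraction bits:

#include <stdfix.h>
#include <stdio.h>

int main(void) {
    _Accum a = 1.5k;               // the 'k' suffix marks an accum (fixed-point) literal
    _Accum b = 2.25k;
    _Accum c = a * b;              // the compiler emits the widening multiply and shift
    printf("%f\n", (double)c);     // prints 3.375000
    return 0;
}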
Embracing fixed-point isn’t just about squeezing cycles—it’s about enabling complex applications on minimal hardware, from edge AI to game physics, where every microsecond counts.
Source: vanhunteradams.com