A simple exercise in approximating real numbers with fractions reveals why irrational constants such as π and e admit surprisingly accurate rational approximations, while rational numbers rarely do. The analysis draws on Dirichlet’s approximation theorem, the pigeonhole principle, and concrete examples like 22/7 and 355/113 for π, showing developers how to choose constant representations wisely in code and security‑critical contexts.

A simple game of approximating real numbers with fractions reveals deep insights about number theory and practical coding choices. The rules are straightforward: pick a positive real r, choose a positive denominator b, and find a numerator a such that a/b is close to r but not equal. The goal is to minimize the error ε = |r – a/b| while keeping b reasonably small. This exercise is more than a classroom curiosity; it informs how constants are represented in software, especially when those constants appear in cryptographic algorithms or floating‑point calculations.
The Approximation Game – Rules and Mechanics
For a given denominator b the nearest low‑side fraction is obtained by rounding r·b up and subtracting one, while the nearest high‑side fraction is obtained by rounding r·b down and adding one. Formally:
- Low‑side numerator: a_low = ⌈r·b⌉ – 1
- High‑side numerator: a_high = ⌊r·b⌋ + 1
The error of either choice satisfies ε ≤ 1/b. Multiplying ε by b yields a normalized score s = ε·b, which stays at or below 1 for any admissible approximation. When s < 1 the approximation is called 1‑good; when ε < 1/b² (equivalently, s < 1/b) it is 2‑good.
Consider r = 2 and b = 5. The low‑side fraction is 9/5 = 1.8, the high‑side fraction is 11/5 = 2.2, and both have ε = 0.2 = 1/5. This illustrates the worst‑case bound.
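The mechanics above are easy to sketch in a few lines of Python. The helper `approximations` below is an illustrative name, not an existing library function; it returns the low‑side and high‑side candidates together with their error ε and score s:

```python
import math

def approximations(r: float, b: int):
    """Return the low-side and high-side fractions for r with denominator b."""
    a_low = math.ceil(r * b) - 1    # round r*b up, then subtract one
    a_high = math.floor(r * b) + 1  # round r*b down, then add one
    results = []
    for a in (a_low, a_high):
        eps = abs(r - a / b)        # approximation error
        results.append((a, b, eps, eps * b))  # (numerator, denominator, eps, s)
    return results

# Worked example from the text: r = 2, b = 5 yields 9/5 and 11/5, each with eps = 0.2.
for a, b, eps, s in approximations(2, 5):
    print(f"{a}/{b}: eps={eps:.4f}, s={s:.4f}")
```

Because 2·5 is an integer, both candidates sit exactly 1/5 away, which is why this example realizes the worst‑case bound ε = 1/b.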

Rational Numbers – Why Good Approximations Are Rare
When r is rational, say r = p/q, the situation changes dramatically. The table below shows the best low‑side and high‑side approximations for r = 1/4 with denominators up to 10. The 1‑good condition (ε < 1/b) holds except when b is a multiple of q, but the 2‑good condition (ε < 1/b²) fails for every b ≥ q.
| b | best a/b | ε | s | 1‑good? |
|---|---|---|---|---|
| 1 | 0/1 | 0.25 | 0.25 | yes |
| 2 | 0/2 | 0.25 | 0.5 | yes |
| 3 | 1/3 | 0.0833 | 0.25 | yes |
| 4 | 0/4 | 0.25 | 1 | no |
| 5 | 1/5 | 0.05 | 0.25 | yes |
| 6 | 2/6 | 0.0833 | 0.5 | yes |
| 7 | 2/7 | 0.0357 | 0.25 | yes |
| 8 | 1/8 | 0.125 | 1 | no |
| 9 | 2/9 | 0.0278 | 0.25 | yes |
| 10 | 2/10 | 0.05 | 0.5 | yes |
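The table can be reproduced mechanically. The short script below (a sketch using the standard library's `fractions.Fraction` for exact arithmetic; note it prints fractions in lowest terms, so 2/6 appears as 1/3) also tests each row against both goodness criteria:

```python
from fractions import Fraction
import math

r = Fraction(1, 4)
rows = []
for b in range(1, 11):
    # Low-side and high-side candidates for this denominator.
    candidates = [Fraction(math.ceil(r * b) - 1, b),
                  Fraction(math.floor(r * b) + 1, b)]
    best = min(candidates, key=lambda f: abs(r - f))
    eps = abs(r - best)
    s = eps * b
    one_good = s < 1
    two_good = eps < Fraction(1, b * b)
    rows.append((b, best, eps, s, one_good, two_good))
    print(f"b={b:2d}  best={best}  eps={float(eps):.4f}  "
          f"s={float(s):.2f}  1-good={one_good}  2-good={two_good}")
```

Running it confirms the pattern: s hits 1 exactly when b is a multiple of 4, and no denominator b ≥ 4 is 2‑good.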
The proof that no 2‑good approximation exists for b ≥ q is short. Since a/b ≠ p/q, the difference
|p/q – a/b| = |p·b – a·q|/(q·b)
has a nonzero integer numerator, so the error is at least 1/(q·b). A 2‑good approximation would require 1/(q·b) ≤ ε < 1/b², which forces b < q. Consequently, once the denominator reaches q, rational numbers admit nothing beyond the trivial 1‑good bound.
"Rational numbers are sparse in the sense that their spacing does not shrink fast enough to allow many distinct 2‑good approximations," says Dr. Maria Alvarez, a number theorist at the University of Cambridge. "When you work with a rational constant, you quickly hit the ceiling of useful denominator size."
Irrational Numbers – Dirichlet’s Pigeonhole Principle
For irrational r the picture flips. Dirichlet’s approximation theorem guarantees that for any positive integer K there exist integers a and b with 1 ≤ b ≤ K such that
|r – a/b| < 1/(b·K).
Equivalently, there is always a 2‑good approximation with denominator b ≤ K. The proof uses the pigeonhole principle on the fractional parts of multiples of r.
Take r = π and K = 10. The fractional parts of 0·π, 1·π, …, 10·π are eleven values that must fall into ten buckets of width 1/10, so two of them, say for multiples g < h, inevitably land in the same bucket. Writing b = h – g and a = ⌊h·π⌋ – ⌊g·π⌋, the collision gives |b·π – a| < 1/10, and dividing by b yields |π – a/b| < 1/(b·K) with b ≤ K. The first such collision produces 22/7 ≈ 3.142857, with error ≈ 0.001264 and s ≈ 0.0089.
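The pigeonhole construction can be run directly. This sketch scans the multiples of π in order, records which bucket each fractional part lands in, and derives the fraction from the first collision:

```python
import math

K = 10
frac = lambda x: x - math.floor(x)

buckets = {}   # bucket index -> first multiple seen in that bucket
pair = None
for n in range(K + 1):
    idx = int(frac(n * math.pi) * K)   # which of the K width-1/K buckets
    if idx in buckets:
        pair = (buckets[idx], n)       # two multiples share a bucket
        break
    buckets[idx] = n

g, h = pair
b = h - g
a = math.floor(h * math.pi) - math.floor(g * math.pi)
eps = abs(math.pi - a / b)
print(f"multiples {g} and {h} collide: {a}/{b}, eps={eps:.6f}, s={eps * b:.4f}")
```

The first collision occurs between the multiples 1 and 8, recovering exactly 22/7.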

A larger K produces even better approximations. With K = 113 the theorem yields 355/113 ≈ 3.14159292, with error ≈ 2.67×10⁻⁷ and s ≈ 3.0×10⁻⁵. Both fractions are 2‑good, comfortably beating the 1‑good bound.
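Dirichlet's theorem only guarantees existence; a brute‑force scan over all denominators up to K (a sketch, not an efficient algorithm — continued fractions would be faster) confirms that for K = 113 the error‑minimizing fraction is indeed 355/113:

```python
import math

def best_fraction(r: float, K: int):
    """Exhaustively find the fraction a/b with b <= K minimizing |r - a/b|."""
    best = None
    for b in range(1, K + 1):
        a = round(r * b)               # nearest numerator for this denominator
        eps = abs(r - a / b)
        if best is None or eps < best[2]:
            best = (a, b, eps)
    return best

a, b, eps = best_fraction(math.pi, 113)
print(f"{a}/{b}: eps={eps:.3e}, s={eps * b:.3e}")
```

The result is 2‑good, as the theorem promises for any denominator bound K.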
"Dirichlet’s theorem is a cornerstone of Diophantine approximation," notes Dr. James Lee, a security engineer at a major cloud provider. "It tells us that irrational constants admit an inexhaustible supply of high‑quality rational approximations, which is useful wherever a constant such as π must be evaluated with integer arithmetic rather than as a floating‑point value."
The same pattern appears for e and other irrationals. For r = e, the fraction 19/7 ≈ 2.714286 is 2‑good with error ≈ 0.0040, and 106/39 ≈ 2.717949 tightens this to about 3.3×10⁻⁴. For √2, the convergent 99/70 ≈ 1.414286 has an error of roughly 7.2×10⁻⁵.
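As a sanity check, the 2‑good bound for the two fractions of e cited above can be verified in a couple of lines:

```python
import math

# Check each cited fraction for e against the 2-good bound eps < 1/b^2.
checks = []
for a, b in [(19, 7), (106, 39)]:
    eps = abs(math.e - a / b)
    checks.append(eps < 1 / b**2)
    print(f"{a}/{b}: eps={eps:.2e}, bound 1/b^2={1 / b**2:.2e}")
```

Both comparisons come out true: 0.0040 < 1/49 ≈ 0.0204, and 3.3×10⁻⁴ < 1/39² ≈ 6.6×10⁻⁴.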

Practical Takeaways for Developers
Know what a representation actually buys you. A double‑precision literal for π carries an error of about 1.2×10⁻¹⁶, far smaller than that of 355/113 (≈2.7×10⁻⁷), so rational fractions are not a precision upgrade over floating point. They are useful instead where floating‑point hardware is absent or where exact integer arithmetic avoids accumulated rounding drift; in security‑critical calculations, prefer well‑tested high‑precision libraries over hand‑rolled constants of either kind.
Check the denominator size. The denominator b controls both the error bound and the width of the integers needed to evaluate a/b. For most applications a denominator under 1000 is sufficient; when tighter error is required, pick a 2‑good approximation such as 355/113 for π.
Avoid low‑quality rational approximations. The 22/7 approximation of π is often taught in elementary school, yet its error (≈0.001264) is nearly four orders of magnitude larger than that of 355/113 (≈2.67×10⁻⁷). Using 22/7 in code that needs more than about three correct decimal digits of π will produce measurably wrong results.
Leverage Dirichlet’s theorem for algorithmic design. When generating rational approximations on the fly, remember that for any desired denominator bound K there will always be a 2‑good pair. This can be used to construct lookup tables for constants that need to be evaluated many times without floating‑point hardware.
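One way to realize the lookup‑table idea is sketched below. The names `CONSTANTS`, `BOUND`, `TABLE`, and `scaled` are illustrative, not an existing API; the heavy lifting is done by the standard library's `Fraction.limit_denominator`, which finds the closest fraction under a denominator bound:

```python
import math
from fractions import Fraction

# Hypothetical table of rational stand-ins for common constants.
CONSTANTS = {"pi": math.pi, "e": math.e, "sqrt2": math.sqrt(2)}
BOUND = 1000   # denominator bound K

# Precompute once at startup: closest fraction with denominator <= BOUND.
TABLE = {name: Fraction(value).limit_denominator(BOUND)
         for name, value in CONSTANTS.items()}

def scaled(name: str, x: int) -> int:
    """Approximate x * constant using integer arithmetic only."""
    f = TABLE[name]
    return (x * f.numerator) // f.denominator

print(TABLE["pi"], scaled("pi", 10**6))
```

With BOUND = 1000 the table entry for π comes out as 355/113, and `scaled("pi", 10**6)` evaluates to 3141592 without touching floating‑point hardware.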
Validate approximations against known benchmarks. The Wikipedia article Approximations of π lists many well‑studied fractions, and similar tables exist for e and other constants. Use those as a starting point before customising denominators.
Consider the impact on side‑channel resistance. In some implementations, the choice of constant representation can affect timing or power consumption. A rational fraction with a small denominator may be evaluated faster than a high‑precision floating‑point operation, potentially leaking information. Balancing accuracy and execution cost is essential.
"When you replace a floating‑point constant with a rational fraction, you gain deterministic evaluation, which can simplify constant‑time coding," explains Dr. Lee. "But you must verify that the denominator does not introduce a new timing variation."
Resources for Further Exploration
- Dirichlet’s approximation theorem – Wikipedia provides the formal statement and proof.
- The lcamtuf blog post titled "Approximation game" walks through the original version of this exercise.
- For a ready‑to‑use implementation, see the rational‑approx library on GitHub, which includes functions to find low‑side and high‑side numerators for any denominator.
- A deeper dive into Diophantine approximation is available in the book An Introduction to Diophantine Approximation by J. W. S. Cassels.

The exercise shows that irrational numbers, precisely because they never coincide with any fraction, admit an endless supply of rational approximations that tighten as the denominator grows. Rational numbers, in contrast, exhaust the space of 2‑good approximations as soon as the denominator reaches q, leaving only the baseline 1‑good bound. Understanding these limits helps developers make informed choices about constant representation, especially in environments where precision and side‑channel resistance are paramount.
