How flawed questions posed to ChatGPT during physical therapy led to an elegant mathematical result involving π/4, revealing the unexpected value of 'stupid questions' in AI-assisted research.
In the landscape of human-AI collaboration, we often focus on grand achievements and breakthroughs. Yet some of the most interesting developments emerge from unexpected places—like a mathematician having conversations with ChatGPT while doing physical therapy exercises.
The author, a self-professed question-asker who admits to having annoyed landlords, car salesmen, and even colleagues with their persistent inquiries, found an unlikely partner in ChatGPT. What began as a simple probability question about coin tosses—"What is the probability that the stopping time is even?"—quickly revealed itself as fundamentally flawed. When heads first exceed tails, heads must lead by exactly one, so the total number of tosses is always odd; the probability of an even stopping time is trivially zero.
A human interlocutor might have responded with frustration, but ChatGPT simply acknowledged the ambiguity and offered alternative interpretations. This patience, the author notes, is something no human could match. "The problem with putting [Epictetus's advice] into action has always been that, while most people want to improve, nobody wants to reveal their ignorance. How lucky we are, twenty centuries after Epictetus, that we can hide our ignorance from our fellow humans and reveal it only to our creations!"
Following up on this initial exchange, the author asked a better question: "If we stop tossing as soon as the proportion of heads becomes bigger than 1/2, what is the expected value of that proportion?" The answer ChatGPT provided was unexpectedly elegant: π/4.
This discovery reveals something fascinating about AI-assisted research. The author notes that ChatGPT not only provided the correct answer but also offered a "brisk derivation" in just under three minutes, a proof the author could have worked out unaided in an afternoon but not nearly so quickly. The interaction demonstrates how AI can serve as both computational tool and intellectual sparring partner.
The mathematical finding itself is intriguing: when a fair coin is tossed repeatedly until heads exceed tails, the expected proportion of heads at the stopping time is exactly π/4. This unexpected appearance of π in probability theory mirrors Wigner's "unreasonable effectiveness of mathematics": a fundamental constant of geometry emerging in a statistical context.
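The article does not reproduce ChatGPT's derivation, but the result can be checked numerically from the standard analysis of this stopping rule: the game ends at an odd time T = 2k + 1 with probability Catalan(k) / 2^(2k+1), at which point the fraction of heads is (k + 1) / (2k + 1). The sketch below (the function name and term count are illustrative choices, not from the article) sums this series:

```python
import math

def expected_stopping_fraction(terms=1_000_000):
    """Partial sum of E[heads / tosses] at the first time heads > tails.

    The stopping time is always odd, T = 2k + 1, reached with probability
    P(T = 2k+1) = Catalan(k) / 2^(2k+1); the heads fraction at that moment
    is (k + 1) / (2k + 1).
    """
    p = 0.5                                # P(T = 1): heads on the first toss
    total = 0.0
    for k in range(terms):
        total += p * (k + 1) / (2 * k + 1)
        p *= (2 * k + 1) / (2 * (k + 2))   # P(T = 2k+3) from P(T = 2k+1)
    return total

print(expected_stopping_fraction())        # ≈ 0.7851, approaching π/4 ≈ 0.785398
```

The terms decay only like k^(-3/2), so the partial sums approach π/4 quite slowly, which already hints at why the method makes a poor π-estimator in practice.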
The author even explored practical applications of this discovery as a method for estimating pi. While theoretically sound, the approach proves remarkably inefficient. As mathematician Matt Parker demonstrated through experimentation, obtaining even a single decimal digit of accuracy requires approximately 10,000 coin tosses. "My method of estimating pi is a really bad way to get anything better than π ≈ 3," the author admits.
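Neither the author's nor Parker's code is given in the article; the following Monte Carlo sketch illustrates the inefficiency under stated assumptions (the trial count, toss cap, and seed are arbitrary choices of this summary). Because the stopping time has infinite expectation, each trial is capped, and the rare capped runs are discarded, which introduces a small upward bias:

```python
import math
import random

def estimate_pi(trials=20_000, max_tosses=10_000, seed=1):
    """Estimate pi as 4 x the average heads fraction at the stopping time."""
    rng = random.Random(seed)
    total = 0.0
    completed = 0
    for _ in range(trials):
        heads = tails = 0
        for _ in range(max_tosses):
            if rng.random() < 0.5:
                heads += 1
            else:
                tails += 1
            if heads > tails:                     # stopping rule reached
                total += heads / (heads + tails)  # record heads fraction
                completed += 1
                break
    # Runs that never stop within max_tosses are dropped (slight bias upward,
    # since the excluded runs have heads fractions barely above 1/2).
    return 4 * total / completed

print(estimate_pi())  # prints a value near 3.15: barely better than "pi ≈ 3"
```

Even with tens of thousands of trials, the estimate wobbles in the second decimal place, consistent with the article's point that the method is a terrible way to beat π ≈ 3.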
Beyond the mathematical content, the article offers valuable insights into the changing nature of research and learning in the AI era:
The value of persistent inquiry: The author's willingness to ask questions—even obviously flawed ones—led to a novel mathematical result. This suggests that in AI-assisted research, the cost of asking "stupid questions" approaches zero, potentially accelerating discovery.
AI as intellectual amplifier: ChatGPT served not just as a calculator but as a patient interlocutor that tolerated ambiguous questions and offered multiple interpretations. This allowed the author to refine their thinking iteratively.
The importance of human judgment: Despite ChatGPT's mathematical prowess, the author exercised critical judgment—verifying proofs, checking literature, and recognizing when AI-generated references were unreliable.
The evolution of expertise: The author's embarrassment at initially accepting ChatGPT's unnecessarily complex proof without seeking a simpler alternative highlights how AI tools might inadvertently discourage deeper engagement with problems.
The article concludes with a reflection on the nature of questions in mathematics and beyond. Drawing an analogy to brainstorming sessions where absurd ideas sometimes lead to breakthroughs, the author argues that "if you've got some sort of vague itch that causes you to ask a stupid question, don't neglect the itch just because its initial expression was stupid. Follow that itch, and scratch that question! You may end up with a much better question."
In an era where AI tools are increasingly integrated into creative and technical workflows, this perspective offers a refreshing counterpoint to discussions about AI replacing human intelligence. Instead, it suggests a future where AI serves as an infinitely patient collaborator—one that doesn't judge questions but helps refine them, potentially unlocking new avenues of discovery that emerge from the seemingly foolish.
As the author notes, "The way to find things out is to ask a lot of questions. Ask enough questions, and you're likely to find a new answer: new to you, and once in a while, new to others."
The article also points to broader implications for education and research. If AI can tolerate questions that humans might dismiss as stupid, we may need to rethink how we teach questioning itself—perhaps placing less emphasis on asking the "right" questions immediately and more on developing the persistence to refine ideas through iterative dialogue.
Ultimately, the story demonstrates that the most valuable AI-assisted discoveries may not come from perfectly formed queries but from the willingness to explore intellectual terrain through persistent, sometimes awkward, conversation—even when that conversation happens between a mathematician and an AI while doing leg lifts.
