Domo’s chief design officer, Chris Willis, warns that the current AI hype is driving costly “tokenmaxxing” and superficial projects. He urges leaders to treat AI as a tool, start with clear business needs, and adopt a measured rollout rather than chasing fear‑driven urgency.
Enough with the AI FOMO, go slow‑mo, says Domo CDO

Chris Willis, chief design officer and futurist at Domo, explains why the AI rush is more theater than transformation and outlines a pragmatic path forward.
The problem: hype‑driven urgency
Willis observed that senior executives across the C‑suite feel a ticking clock: “Everyone thinks the clock is ticking and their careers are on the line.” The pressure to adopt large‑language models (LLMs) has turned AI into a marketing prop rather than a solution to a defined problem.
He calls the current climate AI FOMO – a fear‑of‑missing‑out that fuels tokenmaxxing, the practice of buying massive token allowances and forcing teams to consume them regardless of business value. The result is a parade of proofs of concept that never graduate to production.
“It’s not an innovation problem, it’s an impatience problem,” Willis said.
Why the panic is misplaced
- Models lack a spec – LLMs are sold with a blanket promise: “It’ll do anything for anyone, in any language.” Without a clear feature list, leaders cannot match the technology to a concrete need.
- Innovation is not instantaneous – Real change requires a deep understanding of existing processes, data flows, and decision points. Throwing a powerful engine at a dark room does not illuminate the path forward.
- Budget scrutiny is arriving – CFOs are questioning the ROI of token‑heavy contracts that show no measurable impact on the bottom line.
A measured, compliance‑focused approach
Willis recommends a four‑step framework that aligns AI projects with governance and risk controls:
| Step | Action | Compliance focus |
|---|---|---|
| 1. Identify business need | Map a specific pain point (e.g., invoice discrepancy detection). | Document the business justification and expected outcome. |
| 2. Define scope and limits | Specify what the model will do, and explicitly what it will not do. | Create a model‑usage policy that outlines data categories, retention periods, and access rights. |
| 3. Pilot with human‑in‑the‑loop | Deploy a lightweight app that surfaces anomalies for human review. | Conduct a risk assessment, log decisions, and retain audit trails for regulator review. |
| 4. Measure, review, scale | Track key metrics (accuracy, time saved, cost per token). | Verify that the pilot meets the documented business case before expanding. |
By treating AI as a tool rather than a solution, organizations can embed the necessary controls early and avoid the costly re‑engineering that often follows a failed hype‑driven rollout.
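The pilot stage of the framework (step 3) can be sketched in a few lines of Python. This is a minimal illustration, not Domo's implementation: the deviation rule stands in for whatever model the pilot actually uses, and the field names and log format are invented for the example. The point is the shape of the controls – anomalies are surfaced rather than auto-actioned, and every human decision lands in an append-only audit record.

```python
import datetime

def flag_anomalies(rows, amount_field="amount", threshold=2.0):
    """Flag rows whose amount is far above the batch mean.

    A simple deviation rule stands in here for the model under pilot.
    """
    amounts = [float(r[amount_field]) for r in rows]
    mean = sum(amounts) / len(amounts)
    return [r for r in rows if float(r[amount_field]) > threshold * mean]

def record_decision(audit_log, item_id, reviewer, verdict):
    """Append a timestamped decision record so reviews leave an audit trail."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "item_id": item_id,
        "reviewer": reviewer,
        "verdict": verdict,
    }
    audit_log.append(entry)
    return entry
```

Because the log is append-only and every entry names the reviewer and the item, step 4's review gate – and a regulator's audit request – can be answered from the same record.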
Real‑world example: invoice‑review bot
Willis described a recent engagement where Domo built a simple web app that ingested a spreadsheet of invoices, applied a rule‑based LLM to flag outliers, and presented the flagged items to a finance analyst. The client reported a 30% reduction in manual review time and, crucially, could trace every decision back to the original data source – a requirement for both internal audit and external regulators.
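The traceability requirement is the interesting part of that story, and it can be sketched directly. In this hedged example, a plain 1.5×IQR statistical rule stands in for the rule-constrained model, and the column names (`invoice_id`, `amount`) are assumptions; what matters is that each loaded record carries a pointer back to its source file and row, so every flag is auditable.

```python
import csv
import io
import statistics

def load_invoices(csv_text, source="invoices.csv"):
    """Parse invoice rows, tagging each with its originating file and row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    # Row 1 is the header, so data rows start at 2 in the source file.
    return [{"source": source, "row": i, **rec}
            for i, rec in enumerate(reader, start=2)]

def flag_outliers(invoices, field="amount"):
    """Flag amounts above Q3 + 1.5*IQR (a stand-in for the pilot's model)."""
    values = sorted(float(inv[field]) for inv in invoices)
    quartiles = statistics.quantiles(values, n=4)
    q1, q3 = quartiles[0], quartiles[2]
    cutoff = q3 + 1.5 * (q3 - q1)
    return [inv for inv in invoices if float(inv[field]) > cutoff]
```

Each flagged item keeps its `source` and `row` fields, so the analyst – or an auditor months later – can open the original spreadsheet and see exactly which line produced the flag.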
Lessons from a failed AI‑only customer service
Swedish fintech Klarna attempted to replace its support staff with a chatbot. Within weeks, the bot’s inability to handle nuanced queries led to a rollback to human agents, and the company eventually reinstated a hybrid model. Willis highlights the lesson:
“No customer ever just wants to talk to a chatbot. The tool must augment, not replace, human judgment.”
What leaders should do today
- Stop tokenmaxxing – Align token purchases with a defined pilot scope; avoid blanket contracts that encourage waste.
- Start small – Automate a spreadsheet‑driven workflow before tackling complex, unstructured tasks.
- Document everything – Maintain a living record of model inputs, outputs, and decision rationale to satisfy audit and compliance requirements.
- Engage the CFO early – Present a clear cost‑benefit analysis that ties token spend to measurable outcomes.
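The CFO conversation ultimately reduces to back-of-envelope arithmetic: does the token spend buy back more analyst time than it costs? The helper below makes that explicit; every price and volume in it is an invented placeholder, not a quote from any vendor.

```python
def monthly_token_cost(tokens_per_review, reviews_per_month, price_per_1k_tokens):
    """Total monthly token spend for a pilot at a given review volume."""
    return tokens_per_review * reviews_per_month * price_per_1k_tokens / 1000

def monthly_payback(token_cost, analyst_hours_saved, hourly_rate):
    """Net monthly value: positive means the pilot pays for itself."""
    return analyst_hours_saved * hourly_rate - token_cost
```

For example, a pilot burning 2,000 tokens per review across 500 reviews at $0.01 per 1,000 tokens costs $10 a month; if it saves five analyst hours at $50 an hour, it nets $240 – the kind of concrete figure Willis argues should precede any scale-up.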
The coming reckoning
Willis predicts a near‑term “budget reckoning” as finance leaders demand evidence of value. Companies that have built robust governance, clear use‑case definitions, and human‑in‑the‑loop controls will be positioned to continue investing in AI responsibly. Those that chased hype without a foundation will face cutbacks and potential regulatory scrutiny.
For further reading on responsible AI deployment, see the NIST AI Risk Management Framework and the upcoming EU AI Act guidelines.
