StillMe: A Transparency‑First Framework for LLM Responses

In a crowded space of LLM‑powered chatbots, a new open‑source project from Vietnam, StillMe, seeks to change the conversation from performance to accountability. The system is built on a simple premise: every answer a user receives should be traceable, verifiable, and honest about its own uncertainty.

How StillMe Works

At its core, StillMe is a thin wrapper that sits between a user query and the underlying LLM (OpenAI, DeepSeek, Ollama, etc.). It performs the following steps for each request:

  1. Intent Detection – The system classifies the query as philosophical, technical, or factual, then selects the appropriate retrieval‑augmented‑generation (RAG) context.
  2. Safe Prompt Construction – Tokens are trimmed to stay within model limits; language is sanitized for safety.
  3. LLM Inference – The wrapped prompt is sent to the chosen model.
  4. Validator Chain – A series of checks runs on the raw output (a minimal sketch of this pattern follows the list):
    • CitationRequired – Ensures every claim is backed by a citation or marked with [foundational knowledge].
    • EvidenceOverlap – Compares the answer against retrieved documents when available.
    • EgoNeutrality – Strips anthropomorphic phrasing.
    • SourceConsensus – Detects contradictions between multiple sources.
    • EthicsAdapter – Filters unsafe or misleading content while preserving honesty.
  5. Structured Logging – Each step, including latency for RAG, inference, and validation, is logged in a machine‑readable format.
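
The validator chain in step 4 follows a familiar pattern: each check inspects the draft answer and either passes it through, patches it, or records a warning. The sketch below shows that pattern in Python; the class names mirror two of the validators listed above, but the interfaces, rules, and banned-term list are illustrative assumptions, not StillMe's actual code.

from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    warnings: list = field(default_factory=list)

class CitationRequired:
    def check(self, draft: Draft) -> Draft:
        # Illustrative rule: if no bracketed citation appears, patch the answer
        # the way the log below shows (auto-patch with [foundational knowledge]).
        if "[" not in draft.text:
            draft.text += " [foundational knowledge]"
            draft.warnings.append("Missing citation detected - auto-patched")
        return draft

class EgoNeutrality:
    BANNED = ("I feel", "I experience")  # invented term list for this sketch
    def check(self, draft: Draft) -> Draft:
        for term in self.BANNED:
            if term in draft.text:
                draft.text = draft.text.replace(term, "")
                draft.warnings.append(f"Removed anthropomorphic term: {term!r}")
        return draft

def run_chain(draft: Draft, validators) -> Draft:
    # Each validator returns a (possibly patched) draft; warnings accumulate so
    # every decision point can be logged, as in the trace below.
    for validator in validators:
        draft = validator.check(draft)
    return draft

result = run_chain(Draft("I feel the answer is 42."),
                   [CitationRequired(), EgoNeutrality()])
print(result.text)
print(result.warnings)
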
StillMe philosophical query trace (real backend log excerpt)
[INFO] Philosophical question detected — filtering out technical RAG docs
[INFO] Retrieved 3 foundational knowledge documents (RAG cache HIT)
[WARNING] Estimated tokens exceed safe limit — switching to minimal philosophical prompt
[WARNING] Missing citation detected — auto-patched with [foundational knowledge]
[WARNING] Ego-Neutrality Validator removed anthropomorphic term: ['trải nghiệm']
--- LATENCY --- RAG: 3.30s | LLM: 5.41s | Total: 12.04s

The log excerpt, posted on Hacker News, illustrates the level of transparency StillMe offers: every decision point is visible, and the system explicitly flags when it patches a missing citation or strips an anthropomorphic term ('trải nghiệm', Vietnamese for 'experience').
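
The post does not spell out what the "machine-readable format" in step 5 looks like, but a structured record derived from the excerpt above might resemble the sketch below; the field names and schema are assumptions inferred from the log, not StillMe's documented format.

import json
import time

def log_request(query_type: str, rag_s: float, llm_s: float,
                total_s: float, warnings: list) -> None:
    # Hypothetical record: field names are inferred from the latency line and
    # warnings in the excerpt above, not from a documented StillMe schema.
    record = {
        "timestamp": time.time(),
        "query_type": query_type,                  # e.g. "philosophical"
        "latency_s": {"rag": rag_s, "llm": llm_s, "total": total_s},
        "validator_warnings": warnings,
    }
    print(json.dumps(record, ensure_ascii=False))

log_request("philosophical", 3.30, 5.41, 12.04,
            ["Missing citation detected - auto-patched with [foundational knowledge]"])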

Why Transparency Matters

Current commercial LLMs often hide their internal reasoning. Citations are either absent or fabricated, confidence scores are inflated, and the models may refuse to admit uncertainty. For developers building safety‑critical or regulated systems, this opacity can be a liability.

StillMe tackles the problem on several fronts:

  • Epistemic Honesty – The system treats “I don’t know” as a valid response, avoiding overconfidence.
  • Model Agnosticism – No fine-tuning is required; the framework works with any text-generation model (a minimal adapter sketch follows this list).
  • Observability – Logs provide a complete audit trail, useful for debugging and compliance.
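
To make the model-agnosticism claim concrete, here is a minimal sketch of the kind of adapter contract that lets backends be swapped without fine-tuning; the TextModel protocol and EchoModel stub are hypothetical names invented for illustration, not part of StillMe's codebase.

from typing import Protocol

class TextModel(Protocol):
    # Any backend (OpenAI, DeepSeek, Ollama, a local stub, ...) only has to
    # satisfy this one-method contract; the names are illustrative.
    def generate(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in backend used only to show that no fine-tuning is involved."""
    def generate(self, prompt: str) -> str:
        return f"[stub answer] {prompt}"

def answer(model: TextModel, prompt: str) -> str:
    raw = model.generate(prompt)
    # In the real pipeline, the validator chain would run on the raw output here.
    return raw

print(answer(EchoModel(), "What is epistemic honesty?"))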

These features make StillMe a potential tool for teams that need to reconcile the speed of LLMs with the rigor of regulated industries.

Community and Future Work

The project is hosted on GitHub (https://github.com/anhmtk/StillMe-Learning-AI-System-RAG-Foundation) and is already running as a backend with a dashboard. The author invites contributions on:

  • Validator architecture improvements
  • Log structuring and observability tooling
  • Contributor onboarding and documentation
  • Stress‑testing the honesty and transparency claims

The open‑source nature also allows developers to experiment with different validator combinations, tailoring the balance between safety and performance to their own use cases.
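
One way such experimentation could look is a simple profile-based configuration, sketched below; the profile names and format are invented for illustration, and only the validator names come from the project's description.

# Hypothetical profile table for selecting validators per deployment.
VALIDATOR_PROFILES = {
    "fast": ["CitationRequired", "EgoNeutrality"],
    "strict": ["CitationRequired", "EvidenceOverlap", "EgoNeutrality",
               "SourceConsensus", "EthicsAdapter"],
}

def validators_for(profile: str) -> list:
    return VALIDATOR_PROFILES[profile]

print(validators_for("fast"))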

Takeaway

StillMe represents a thoughtful step toward making LLMs more accountable. By layering validation, enforcing citations, and exposing every internal decision, it offers a blueprint for developers who cannot afford to rely on opaque AI. Whether the community adopts the framework remains to be seen, but the conversation it sparks about epistemic honesty and system observability is already valuable.

Source: Hacker News discussion on "StillMe – a transparency-first LLM framework" (https://news.ycombinator.com/item?id=46215213).