Sepehr Khosravi explores the current state of AI-assisted coding, comparing Cursor and Claude Code while sharing practical tips for maximizing productivity gains through context engineering and tool selection.
Sepehr Khosravi discusses the current state of AI-assisted coding, moving beyond basic autocompletion to sophisticated agentic workflows. He explains the technical nuances of Cursor's "Composer" and Claude Code's research capabilities, providing tips for managing context windows and MCP integrations. He shares lessons from industry leaders on shrinking process time beyond just writing code.
Current State of Developer Productivity
A Stanford study of over 100,000 employees found that AI generates 30-40% more code, but 15-25% of that code requires rework due to bugs or deletion. The net productivity gain is approximately 15-20%. However, Khosravi believes this can be higher with proper tool usage.
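The arithmetic behind that net figure can be sketched with a toy model (the model is an illustrative assumption, not taken from the study): suppose reworked code delivers no value and costs roughly as much again to fix, so each reworked unit counts twice against the gross gain.

```python
def net_gain(gross_increase: float, rework_share: float) -> float:
    # Toy assumption: reworked code adds no value and costs as much
    # again to fix, so each reworked unit is counted twice.
    return gross_increase * (1 - 2 * rework_share)

# Pessimistic and optimistic ends of the reported ranges:
print(round(net_gain(0.30, 0.25), 2))  # 0.15 -> 15% net
print(round(net_gain(0.40, 0.15), 2))  # 0.28 -> 28% net
```

Under these assumptions the pessimistic end lands at 15%, broadly consistent with the study's reported 15-20% net gain.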
Categories of AI Tools
Khosravi identifies three categories of AI tools:
- All-in-one tools aimed at non-developers - Where 100x productivity gains are possible, particularly for beginners building simple applications
- IDE-layer tools - Built on top of foundation models; examples include Copilot, Cursor, Windsurf, IntelliJ, Cline, and Google Antigravity
- Terminal-based CLI tools - Built by the foundation-model makers themselves, including Anthropic's Claude Code, OpenAI's Codex, and Google's Gemini CLI
Cursor: Top 14 Tips
1. Tab Completion
Cursor's specialized model excels at tab completion, suggesting 10-20 lines of code based on recent changes, linting, and accepted edits.
2. Cursor Agent
Choose from various models (Gemini, ChatGPT, Claude) with built-in tooling for file reading, web search, terminal access, and MCP integrations.
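MCP servers are typically wired in through a JSON config; the sketch below assumes Cursor's `.cursor/mcp.json` layout, and the server package and connection string are purely illustrative:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/dev_db"
      ]
    }
  }
}
```

Once registered, the agent can call the server's tools alongside its built-in file, web-search, and terminal tools.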
3. Multi-Agent Mode
Generate multiple responses to the same prompt using different models simultaneously.
4. Shift+Tab Modes
- Agent mode: Default mode for making changes
- Ask mode: For understanding codebase without making changes
- Plan mode: Generates detailed plans before execution
5. Cursor Sound
Turn on audio feedback to reduce wait-time anxiety during code generation.
6. Custom Commands
Create repeatable commands using README files for consistent workflows like PR creation.
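As a sketch, a PR-creation command can live in a markdown file the agent replays on demand (the file path assumes Cursor's `.cursor/commands/` convention, and the steps are illustrative):

```markdown
<!-- .cursor/commands/create-pr.md -->
# Create PR
1. Run the test suite and fix any failures before proceeding.
2. Write a conventional-commit title summarizing the staged changes.
3. Push the current branch and open a PR describing what changed and why.
```

Typing `/create-pr` in the agent chat then triggers these steps.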
7. Rules
Use .mdc files to create rules that automatically apply to chats or specific files:
- Always apply: Rules that trigger on every chat
- Intelligent apply: Agent decides when to apply based on context
- Specific files: Rules that activate only for certain file types
- Manual apply: Functions like custom commands
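A minimal `.mdc` rule, assuming Cursor's frontmatter fields (`description`, `globs`, `alwaysApply`); the rule content itself is illustrative:

```markdown
---
description: API route conventions   # read by the agent for "intelligent apply"
globs: ["src/api/**/*.ts"]           # "specific files" activation
alwaysApply: false                   # set true for "always apply"
---

- Validate request bodies before touching the database.
- Return errors as `{ error: { code, message } }`.
```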
8. Project/User/Team Rules
Share rules across teams while maintaining personal customizations.
9. AGENTS.md Format
Standardized format adopted by multiple tools (Codex, Cursor, Gemini CLI, Copilot) for cross-tool compatibility.
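An AGENTS.md is plain markdown that any supporting agent reads when it opens the repo; this sketch shows the kinds of sections teams commonly include (the contents are illustrative):

```markdown
# AGENTS.md

## Setup
- Install dependencies with `pnpm install`; run locally with `pnpm dev`.

## Conventions
- TypeScript strict mode; no default exports.

## Testing
- Run `pnpm test` and ensure it passes before proposing a commit.
```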
10. Context Window Management
Open new agent chats for different tasks to avoid context deterioration.
11. Cursor Checkpoints
Restore previous states in chat conversations when AI goes off-track.
12. Slack Integration
Tag Cursor in Slack for quick code changes without leaving the chat interface.
13. Cursor Browser
View the running app alongside the agent, giving it access to console logs and network traffic.
14. YOLO Mode
Auto-accept mode for rapid iteration (use with caution).
Claude Code vs. Cursor: Real-World Comparison
Khosravi tested both tools on the same prompt for implementing a feature:
- Cursor: Selected a single solution and executed it, but produced a suboptimal design
- Claude Code: Searched web for open-source repos, presented three options with pros/cons, saved significant time
Key Differences:
- Claude Code excels at: Complex features, research, deep analysis
- Cursor excels at: Quick outputs, speed with Composer model, visual interface
Claude Code Core Features
Skills
Auto-invoked rules that trigger based on specific contexts (similar to Cursor's intelligent apply rules).
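A skill is typically a folder containing a SKILL.md whose frontmatter description tells the agent when to auto-invoke it; the example below is a sketch assuming Claude Code's documented layout, with illustrative contents:

```markdown
<!-- .claude/skills/pr-review/SKILL.md -->
---
name: pr-review
description: Use when reviewing pull requests to apply the team checklist
---

When reviewing a PR:
1. Check that changed public functions have tests.
2. Flag any newly added dependencies for a security review.
```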
Subagents
Explicit workflows with dedicated context windows and specific MCP access:
- PagerDuty subagent: Monitors pages, investigates logs, finds root causes
- Documentation subagent: Updates docs based on PR changes
- Project management subagent: Tracks task completion across teams
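A subagent such as the documentation example can be sketched as a markdown definition with its own tool allowlist (the path and prompt are illustrative, assuming Claude Code's `.claude/agents/` format):

```markdown
<!-- .claude/agents/doc-updater.md -->
---
name: doc-updater
description: Updates documentation when a PR changes public APIs; use proactively
tools: Read, Edit, Grep, Glob
---

You maintain the docs/ directory. Given a diff, find the affected pages,
update examples and parameter tables, and add a changelog entry.
```

Because the subagent runs in its own context window, its research does not crowd out the main conversation's context.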
Commands
Similar to Cursor's custom commands for repeatable workflows.
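A Claude Code command is likewise a markdown file, with `$ARGUMENTS` standing in for anything typed after the slash command (the example assumes the `.claude/commands/` convention and is illustrative):

```markdown
<!-- .claude/commands/fix-issue.md -->
Find GitHub issue $ARGUMENTS, reproduce the bug with a failing test,
fix it, and prepare a commit that references the issue.
```

Running `/fix-issue 123` substitutes `123` for `$ARGUMENTS`.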
Plugins
Bundle skills, agents, and commands into distributable packages for team use.
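A plugin is essentially a directory of the pieces above plus a small manifest; this sketch assumes Claude Code's `.claude-plugin/plugin.json` layout, with the name and contents illustrative:

```json
{
  "name": "team-workflows",
  "description": "Shared review, docs, and incident-response commands",
  "version": "0.1.0"
}
```

Alongside the manifest, `commands/`, `agents/`, and `skills/` directories hold the bundled pieces, which teammates can then install as a unit.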
Alternative Tools
Cline
- Cheaper than Cursor
- Better results for some users
- Lacks Cursor's Composer model and indexing capabilities
Codex
- Strong performance for some users
- Better at explaining reasoning than Claude Code
- More confident but potentially less accurate
Google Antigravity
- First IDE built by a foundation-model company
- Access to multiple models (Gemini, ChatGPT, Claude)
- Promising new approach
Beyond Coding: Documentation and More
DeepWiki
AI-generated documentation for any repository, with more than 20,000 repos already indexed.
AI Code Reviewers
Tools like CodeRabbit catch syntax errors, styling issues, and enforce PR formats.
Low-Code Tools
Tools like Lovable and n8n empower non-developers to build AI automations and workflows.
Evaluating Impact and Costs
Metrics
No perfect metric exists, but track multiple data points to support qualitative experiences.
Cost Strategy
- Deliberately overspend for the first six months
- Evaluate gains before adjusting
- Consider Kimi model for low-cost, high-quality outputs
Task-Specific Effectiveness
- Greenfield tasks: High productivity gains
- Brownfield tasks: Variable effectiveness
- Popular languages: Better performance (Python, Java)
- Legacy languages: More challenging
Lessons from Databricks CEO
Ali Ghodsi shared how one employee went from building one connector every four quarters to seven connectors per quarter (a 28x improvement) by:
- Bringing fresh perspectives to challenge assumptions
- Empowering yaysayers over naysayers for AI initiatives
- Treating software as actual software - reducing PM overhead
- Shrinking process time, not just code writing time
- Reassessing all previously made assumptions
AI's Imperfections
- Unintended changes
- Suboptimal design
- Hallucinations
- Skill erosion
- Security threats
- Dependency risks
Despite these issues, the productivity gains typically outweigh the tradeoffs.
Key Takeaways
- Look beyond coding tasks for productivity improvements
- Try both AI-powered IDEs and CLI tools
- Experiment with rules and skills for repetitive tasks
- Continuously reassess workflow assumptions
- Consider the broader impact on team productivity and processes
The presentation concludes with practical advice for developers at all levels to maximize their AI productivity gains while being mindful of the tools' limitations and best practices for implementation.
