Top Claude Code Tips for Productivity

Most developers who try Claude Code end up using a fraction of what it can do. The engineers getting the most out of it tend to approach it differently, treating it as a development environment rather than a chat interface.
Here are the practices that make the biggest difference, drawn from Anthropic's own teams and experienced power users.
1. Master Context Window Management
The single most important constraint in Claude Code is the context window. It holds your entire session: every message, every file Claude reads, every command output. As it fills, performance degrades, and Claude may start forgetting earlier instructions or making more mistakes. Think of it as your most precious resource.
Use /clear often. Every time you start a new task, clear the chat. Old history eats into your token budget and forces Claude to run expensive compaction summaries.
Watch your usage. Track context consumption and start winding down or compacting sessions before they hit 80–90% capacity. At 90%+, responses can become erratic.
Use @ references to scope your task. Point Claude at only the files relevant to what you're currently doing. If you're fixing a bug in the auth module, there's no reason for Claude to know about your payment service. The narrower the context, the more focused and accurate the output.
2. Always Use Plan Mode Before Writing Code
The most common mistake people make with Claude Code is jumping straight to implementation. Plan Mode (activated by pressing Shift+Tab twice) puts Claude into an "architect" state where it can analyse your codebase, explore solutions, and produce a detailed plan, but cannot change any files until you approve.
Ask Claude to explore the problem space first: "I want to build X. Can you explore three approaches starting with the simplest?" Only once you've agreed on a direction should you let it start coding. Every experienced user says the same thing: time spent planning multiplies the quality of what comes out.
Pro tip: After agreeing on an approach, ask Claude to produce a spec and a phased to-do list before writing a single line. You stay in control of scope at every milestone.
3. Set Up a CLAUDE.md for Every Project
CLAUDE.md is Claude's memory bank for your project. It loads automatically at the start of every session and tells Claude everything it needs to know: your tech stack, coding conventions, preferred libraries, test commands, and anything else that would otherwise need repeating.
There's no required format; the power is in being specific. If Claude makes a mistake during a session, don't just fix the code. Add a rule to CLAUDE.md so it never happens again. Over time, the file becomes an ever-sharpening set of instructions tuned to your exact codebase.
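As a starting point, a minimal CLAUDE.md might look like the sketch below. The stack, commands, and conventions here are placeholders; substitute your own.

```markdown
# Project: payments-api

## Tech stack
- TypeScript, Node 20, Fastify
- Vitest for tests, pnpm for packages

## Commands
- Run tests: pnpm test
- Typecheck: pnpm typecheck
- Lint: pnpm lint

## Conventions
- Never use `any`; prefer explicit types
- All database access goes through src/db/repository.ts
- New endpoints need an integration test before merge
```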
Be careful with @ references in CLAUDE.md. Anything you @-import there gets loaded into context automatically at the start of every session, whether it's needed or not. A common trap is loading code style guidelines — indentation rules, naming conventions, formatting preferences. These bloat your context on every session and eat into the instructions Claude can reliably follow. Use a linter instead, or set up a hook that runs your formatter automatically. Reserve CLAUDE.md for context Claude genuinely can't infer on its own.
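One way to set up such a formatting hook is a PostToolUse entry in .claude/settings.json. This is a sketch, assuming the hook receives tool details as JSON on stdin and that your project uses Prettier; swap in your own formatter command.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

With this in place, every file Claude edits gets formatted automatically, and no style rules need to live in CLAUDE.md at all.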
Advanced pattern: Place topic-specific rule files in .claude/rules/ with path frontmatter so TypeScript rules only load for .ts files, Go rules for .go files, and so on, keeping your main CLAUDE.md lean and fast.
---
paths: ["**/*.ts"]
---
# TypeScript conventions
Prefer interfaces over types.
Always use strict null checks.
4. Give Claude a Feedback Loop
The best Claude Code sessions have one thing in common: Claude can see the results of its own work. Include test commands, linters, or expected outputs directly in your prompt and Claude will run them, see what fails, and fix it, without you needing to step in. Boris Cherny, the creator of Claude Code, puts the quality improvement from this alone at 2–3x.
There are a few ways to extend this further. For UI work, connecting the Playwright MCP server lets Claude open a browser, interact with your app, and verify the interface actually behaves as expected, catching issues that unit tests miss entirely.
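One way to wire this up is a project-scoped .mcp.json at the repository root, so the whole team gets the same server. This sketch assumes the npx-published @playwright/mcp package; check the server's own docs for the current package name.

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```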
For code quality, LSP plugins (TypeScript, Pyright, rust-analyzer and others) give Claude automatic diagnostics after every file edit, so type errors and unused imports get caught and fixed before you've even noticed them.
For longer-running tasks like full test suites or research jobs, background subagents let Claude keep working autonomously while you get on with something else, reporting back when done. For example: "Run the full test suite in a background agent and let me know what fails." Claude spins up a separate agent, runs the tests, and returns the results when it's finished. If those agents need to edit files, pair them with Git worktrees (see tip 7) so they can't trample each other's changes.
5. Challenge Claude and Hold It Accountable
Most developers treat Claude Code passively: give it a task, accept the output. Power users do the opposite. They push back, interrogate, and demand proof.
Try prompts like:
- "Grill me on these changes and don't make a PR until I pass your test."
- "Prove to me this works."
- "Knowing everything you know now, scrap this and implement the elegant solution."
This reframes the dynamic. Claude stops being a generator and starts being a collaborator with skin in the game. It surfaces edge cases, challenges assumptions, and produces work that's genuinely production-ready rather than superficially correct.
6. Use @ File References Strategically
Rather than letting Claude read everything, be deliberate about what context you hand it. Use @filename to point Claude to the specific files it needs for the current task. This keeps the context window lean and ensures Claude is reasoning about relevant code rather than wading through your entire codebase.
The key principle: treat @ references as "here's more context if you need it" rather than loading everything upfront. Claude will read the file when it's actually relevant.
7. Run Parallel Claude Instances
Claude Code isn't limited to one session at a time. You can open multiple instances in different terminal panes or IDE windows, each tackling a separate part of the codebase simultaneously: one refactoring a module while another writes tests, for example.
For isolation, use Git worktrees. Each worktree gets its own directory, its own Claude session, and its own git state, so parallel agents can't trample each other's file edits or create impossible merge states. When each task is done, merge through normal git workflows.
# Ask Claude to set this up (you don't need to know the Git syntax)
"Create a git worktree for the auth refactor and start working there"
This is one of the biggest workflow unlocks for experienced users, and something traditional IDEs simply can't match.
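For the curious, this is roughly what Claude does under the hood when you ask for a worktree. The sketch below demonstrates it in a throwaway repo; in practice you'd run only the last two commands from your existing project, and the paths and branch name are illustrative.

```shell
set -e
# Create a throwaway repo just for demonstration
repo=$(mktemp -d)/demo
git init -q "$repo" && cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# New directory + new branch, sharing the same object store as the main checkout
git worktree add -q -b auth-refactor ../demo-auth-refactor

# Lists both checkouts and the branch each has checked out
git worktree list
```

Each listed directory is a fully independent checkout: a Claude session in one can't see uncommitted edits in the other, which is exactly the isolation parallel agents need.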
8. Build Custom Slash Commands for Repeated Workflows
Anything you find yourself describing to Claude repeatedly is a candidate for a slash command. By placing markdown files in .claude/commands/, you create reusable, triggerable workflows: code review checklists, deployment prep steps, onboarding routines, or whatever your team does over and over.
Use $ARGUMENTS as a dynamic placeholder to keep commands flexible:
# .claude/commands/review.md
Review the code in $ARGUMENTS. Check for:
- Logic errors and edge cases
- Security vulnerabilities
- Missing error handling
...etc
Then trigger it with /review src/auth/login.ts.
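Command files can also carry YAML frontmatter. Assuming the description and argument-hint fields work as documented, a fuller version of the review command might look like this:

```markdown
---
description: Review a file for common issues
argument-hint: <file-path>
---
Review the code in $ARGUMENTS. Check for:
- Logic errors and edge cases
- Security vulnerabilities
- Missing error handling
Summarise findings by severity before suggesting fixes.
```

The description shows up in the slash-command picker, and the argument hint reminds teammates what the command expects.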
Because these files can live in your repo's .claude/ directory, the whole team benefits the moment someone creates a useful command. It becomes a shared, version-controlled library of AI-powered workflows that gets smarter over time.
Where to go from here
These eight tips are a solid foundation, but they're really just the start of a deeper shift in how you work. The gap between developers who use AI and those who've built a proper system around it is growing fast, and it shows up in output quality, review confidence, and how much time you spend fixing things that shouldn't have shipped.
That's exactly what Unlearn is built around. It's a platform for developers who want to go beyond tips and build the kind of workflow that compounds, covering everything from speccing features before you prompt, to reviewing AI-written code with the instincts of a senior engineer, to building reusable agents and MCP integrations that make every session better than the last.
If AI is already part of how you work, Unlearn is where you build the system around it.
