Every Claude Code user knows hooks can block dangerous commands. But did you know hooks can spawn an AI agent to verify your work before allowing Claude to stop? Or POST to a webhook every time Claude touches a file — without writing a single line of shell script? Or survive context compaction by automatically re-injecting your project rules after /compact?
My previous hooks guide covered 4 events and shell scripts. That was the foundation. But Claude Code’s hook system has grown into something much deeper: 17 events, 4 handler types, and patterns that transform it from a “block dangerous commands” tool into a full automation platform.
Here’s what most developers are missing.
The Full Landscape: All 17 Hook Events
Most tutorials only mention PreToolUse, PostToolUse, Notification, and Stop. That leaves 13 events on the table. Here’s the complete map:
Session Lifecycle
| Event | Fires When | Can Block? | Key Matchers |
|---|---|---|---|
SessionStart | Session begins or resumes | No | startup, resume, clear, compact |
PreCompact | Before context compaction | No | manual, auto |
SessionEnd | Session terminates | No | clear, logout, prompt_input_exit |
User Interaction
| Event | Fires When | Can Block? |
|---|---|---|
UserPromptSubmit | Before Claude processes your prompt | Yes (erases prompt) |
Notification | When Claude sends a notification | No |
Tool Lifecycle
| Event | Fires When | Can Block? | Matchers |
|---|---|---|---|
PreToolUse | Before tool executes | Yes | tool name regex |
PermissionRequest | When permission dialog appears | Yes | tool name regex |
PostToolUse | After tool succeeds | No (sends feedback) | tool name regex |
PostToolUseFailure | After tool fails | No (sends feedback) | tool name regex |
Agent Lifecycle
| Event | Fires When | Can Block? |
|---|---|---|
SubagentStart | Subagent spawned | No |
SubagentStop | Subagent finishes | Yes |
Stop | Claude finishes responding | Yes (forces continuation) |
Team Events
| Event | Fires When | Can Block? |
|---|---|---|
TeammateIdle | Teammate about to go idle | Yes |
TaskCompleted | Task being marked completed | Yes |
Infrastructure
| Event | Fires When | Can Block? |
|---|---|---|
ConfigChange | Settings file changes mid-session | Yes (except policy) |
WorktreeCreate | Worktree being created | Yes (replaces default behavior) |
WorktreeRemove | Worktree being removed | No |
The key insight: TeammateIdle, TaskCompleted, WorktreeCreate, and SubagentStop are the events that turn hooks from a solo developer tool into a team-aware platform. Most people have never touched them.
Beyond Shell Scripts: The 4 Handler Types
The type: "command" handler you already know is just one of four options. Each handler type has a different superpower.
command — You Know This One
Shell scripts, stdin/stdout, exit codes. The workhorse.
```json
{
  "type": "command",
  "command": ".claude/hooks/block-dangerous.sh",
  "timeout": 10
}
```

Best for: file operations, external CLIs, custom logic that needs full scripting power.
prompt — AI Makes the Decision
This is the one that changes everything. Instead of writing logic, you describe what you want in plain English and a Claude Haiku instance decides.
```json
{
  "type": "prompt",
  "prompt": "Check if all tasks in the user's original request are fulfilled. Context: $ARGUMENTS. Respond with JSON: {\"ok\": true} or {\"ok\": false, \"reason\": \"what's missing\"}",
  "model": "claude-haiku-4-5",
  "timeout": 30
}
```

No bash. No jq. No exit codes to remember. You describe the decision, Haiku makes it, and Claude gets the result. Best for: yes/no quality checks, content validation, intelligent routing decisions.
agent — Multi-Turn Verification with File Access
An agent hook spawns a subagent that can actually read files, run searches, and inspect your project before making a decision.
```json
{
  "type": "agent",
  "prompt": "Verify all unit tests pass. Run the test suite, read the results, and check for failures. $ARGUMENTS",
  "timeout": 120
}
```

The subagent gets tools: Read, Grep, Glob. It can examine your actual codebase, not just the JSON payload. Best for: complex verification, multi-step checks, anything that needs to look at files.
http — Zero-Script Webhooks
POST the hook payload to any URL. No scripting. No server required on your end if you’re using an existing service.
```json
{
  "type": "http",
  "url": "http://localhost:8080/hooks/tool-use",
  "headers": { "Authorization": "Bearer $MY_TOKEN" },
  "allowedEnvVars": ["MY_TOKEN"],
  "timeout": 30
}
```

The allowedEnvVars field explicitly allowlists which environment variables are safe to inject into headers. Best for: webhooks, audit logging to external systems, dashboards, alerting.
7 Recipes: The Practical Heart
Each recipe is copy-paste ready. Drop the JSON into your .claude/settings.json hooks section.
Recipe 1: AI-Powered Stop Gate
Problem: Claude stops when it thinks it’s done, not when it actually is. Shell scripts can’t judge task completion. Haiku can.
```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Review the conversation. Were all tasks in the user's original request completed? Check for: unimplemented features, TODO comments, failing tests mentioned but not fixed, incomplete refactors. If everything is done, respond {\"ok\": true}. If anything is missing, respond {\"ok\": false, \"reason\": \"specific list of what's unfinished\"}. Context: $ARGUMENTS",
            "model": "claude-haiku-4-5",
            "timeout": 30
          }
        ]
      }
    ]
  }
}
```

How it works: When Claude tries to stop, Haiku reviews the conversation and blocks the stop if tasks are incomplete. Claude gets the reason as feedback and continues working.
Pro tip: Add "stop_hook_active": true detection in a companion command hook to prevent infinite loops — if the Stop hook has already forced continuation once, let Claude stop on the second attempt.
Recipe 2: Agent-Based Test Verification
Problem: Claude says “tests pass” but hasn’t actually run them. An agent hook can verify this by actually executing the suite.
```json
{
  "hooks": {
    "SubagentStop": [
      {
        "hooks": [
          {
            "type": "agent",
            "prompt": "An agent just finished work. Run the test suite now. Use Bash to execute 'npm test' (or the appropriate test command for this project — check package.json). Read the output. If all tests pass, respond {\"ok\": true}. If any tests fail, respond {\"ok\": false, \"reason\": \"paste the failing test names and error messages here\"}. $ARGUMENTS",
            "timeout": 120
          }
        ]
      }
    ]
  }
}
```

How it works: After every subagent completes, an agent hook spins up, actually runs your test suite, and blocks completion if anything fails. The parent agent sees the failure details and knows exactly what to fix.
Pro tip: Check package.json exists before assuming npm test — the agent prompt can instruct the subagent to detect the right test command first.
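One way to sketch that detection in plain shell, usable from a companion command hook. The fallback commands and file checks here are assumptions for illustration; extend them for your own toolchain.

```shell
#!/bin/bash
# Rough test-command detection for the current project directory.
# Each check is a heuristic, not a guarantee the command exists.
detect_test_cmd() {
  if [ -f package.json ] && grep -q '"test"' package.json; then
    echo "npm test"
  elif [ -f Cargo.toml ]; then
    echo "cargo test"
  elif [ -f go.mod ]; then
    echo "go test ./..."
  else
    echo ""  # unknown: let the agent decide or skip verification
  fi
}
```

The same decision tree can live inside the agent prompt instead; the shell version is just cheaper to run when you already know your stack.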
Recipe 3: Session Context Survival
Problem: After /compact, Claude loses your project context — current sprint goals, tech stack rules, “never use class components” reminders. This hook re-injects them automatically.
```json
{
  "hooks": {
    "SessionStart": [
      {
        "matcher": "compact",
        "hooks": [
          {
            "type": "command",
            "command": "cat .claude/context-survival.md",
            "timeout": 5
          }
        ]
      }
    ]
  }
}
```

Create .claude/context-survival.md with your critical project rules:
```markdown
## Active Sprint Context (auto-injected after compaction)

- Current feature: Payment retry flow
- DO NOT touch PaymentLegacyService.kt — migration blocked by compliance
- All new API endpoints require rate limiting middleware
- Tech lead decision: Use Result<T> not exceptions for service layer returns
```

How it works: The compact matcher fires only when the session resumes after a compaction. The hook outputs your context file, and Claude reads it as part of the session startup.
Pro tip: Use the startup matcher separately for first-session injection, and resume for when you reopen an existing session. Each matcher targets a different scenario.
Recipe 4: Async Test Runner
Problem: Running tests after every file edit blocks Claude’s workflow. With async hooks, Claude keeps working while tests run in the background.
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/async-test-runner.sh",
            "async": true,
            "timeout": 300
          }
        ]
      }
    ]
  }
}
```

```bash
#!/bin/bash
INPUT=$(cat)
FILE_PATH=$(echo "$INPUT" | jq -r '.tool_input.file_path // ""')

# Only run for source files, skip tests and configs
if [[ "$FILE_PATH" == *.test.* ]] || [[ "$FILE_PATH" == *config* ]]; then
  exit 0
fi

# Run tests and write results to a file Claude can read next turn
TEST_OUTPUT=$(npm test --silent 2>&1 | tail -20)
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

echo "{\"timestamp\": \"$TIMESTAMP\", \"file\": \"$FILE_PATH\", \"output\": $(echo "$TEST_OUTPUT" | jq -Rs .)}" >> .claude/test-results.jsonl
```

How it works: async: true means Claude doesn’t wait for this hook. The hook runs in the background, writing results to a log file. On the next tool call, Claude can see the results if it reads the log. You get continuous test feedback without any latency.
Pro tip: Pair this with a PreToolUse hook that checks the test results log before Claude does more work — if the last async run failed, surface that context before the next edit.
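A sketch of that companion check, written as a function for clarity. The failure heuristic is a plain substring match on the logged output, which is an assumption you would tune to your test runner's actual wording.

```shell
#!/bin/bash
# Returns 2 (the blocking exit code) if the most recent async test run
# recorded a failure, 0 otherwise. In the real PreToolUse hook script you
# would end with: check_last_test_run; exit $?
check_last_test_run() {
  local results="${1:-.claude/test-results.jsonl}"
  [ -f "$results" ] || return 0  # no results yet: allow the edit
  local last
  last=$(tail -n 1 "$results")
  if printf '%s' "$last" | grep -qi 'fail'; then
    echo "Last async test run reported failures: $last" >&2
    return 2  # block and surface the failure as feedback
  fi
  return 0
}
```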
Recipe 5: MCP Tool Auditing
Problem: When Claude uses MCP tools — memory servers, browser automation, external APIs — those calls are invisible to your audit trail. Shell script matchers can’t easily catch them all.
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "mcp__.*",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/mcp-audit.sh",
            "timeout": 5
          }
        ]
      }
    ]
  }
}
```

```bash
#!/bin/bash
INPUT=$(cat)
TOOL_NAME=$(echo "$INPUT" | jq -r '.tool_name // ""')
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
SESSION_ID=$(echo "$INPUT" | jq -r '.session_id // "unknown"')

# Log all MCP tool calls to audit trail
echo "$INPUT" | jq -c --arg ts "$TIMESTAMP" --arg sid "$SESSION_ID" '{
  timestamp: $ts,
  session_id: $sid,
  tool: .tool_name,
  input: .tool_input,
  response_preview: (.tool_response | tostring | .[0:200])
}' >> .claude/mcp-audit.jsonl
```

How it works: The regex pattern mcp__.* matches every MCP tool call — mcp__memory__store, mcp__browser__navigate, mcp__github__create_pr. One hook catches them all. You get a complete trail of every external interaction Claude makes.
Pro tip: Use mcp__memory__.* to audit only memory operations, or mcp__.*__write.* to catch any MCP tool with “write” in its name — regex matchers give you surgical precision.
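You can sanity-check a matcher regex locally before wiring it into settings. This sketch assumes full-string matching against the tool name, which is what the pipe-separated examples above imply; verify behavior against the official hooks reference for edge cases.

```shell
#!/bin/bash
# Does a matcher regex hit a given tool name? Anchored to simulate
# full-string matching. Usage: matches 'mcp__.*' mcp__memory__store
matches() {
  printf '%s' "$2" | grep -Eq "^($1)$"
}
```

Handy for confirming that "Bash" will not accidentally match similarly named tools before you rely on it as a security boundary.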
Recipe 6: Team Quality Gate
Problem: In multi-agent workflows, teammates mark tasks “done” too early. You want a quality check before any task transitions to completed status.
```json
{
  "hooks": {
    "TaskCompleted": [
      {
        "hooks": [
          {
            "type": "agent",
            "prompt": "A task is being marked as completed. Task details: $ARGUMENTS. Verify the following before approving completion: 1) Run the test suite and confirm zero failures. 2) Check that no TODO comments were left in modified files (use Grep). 3) Verify the task description's acceptance criteria are met by reading the relevant files. If all checks pass, respond {\"ok\": true}. If any check fails, respond {\"ok\": false, \"reason\": \"specific failure details for the teammate to fix\"}.",
            "timeout": 180
          }
        ]
      }
    ]
  }
}
```

How it works: Every time a teammate tries to mark a task completed, an agent hook fires. The agent runs tests, scans for TODOs, and verifies acceptance criteria. If anything fails, the task completion is blocked and the teammate gets specific feedback. No task ships without passing the gate.
Pro tip: Scope this to specific task types using the TaskCompleted payload — you can check the task subject and only run expensive verification for tasks tagged as [feature] or [bug], not [docs] or [chore].
Recipe 7: Webhook Integration
Problem: You want tool usage dashboards, Slack alerts when Claude edits production files, or an audit trail in your existing logging service — without writing a server.
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write|Bash",
        "hooks": [
          {
            "type": "http",
            "url": "https://hooks.yourmonitoring.com/claude-code",
            "headers": {
              "Authorization": "Bearer $MONITORING_TOKEN",
              "Content-Type": "application/json"
            },
            "allowedEnvVars": ["MONITORING_TOKEN"],
            "timeout": 5
          }
        ]
      }
    ]
  }
}
```

How it works: Every Edit, Write, or Bash call POSTs the full tool payload to your endpoint. The service receives it, does whatever it needs (log to Datadog, post to Slack, write to a database), and responds. If it returns {"ok": false, "reason": "..."}, the action is blocked. Zero shell scripts. Zero server code beyond your existing monitoring stack.
Pro tip: Add a PostToolUse http hook to the same endpoint for after-the-fact logging. The PreToolUse hook gives you blocking power; the PostToolUse hook gives you outcome data including what changed.
Matchers and Advanced Patterns
The matcher field supports full regex, which makes it far more powerful than most people realize.
- "Bash" → exact tool name (most common)
- "Edit|Write" → either Edit OR Write (regex OR)
- "mcp__memory__.*" → all tools from the memory MCP server
- "mcp__.*__write.*" → any MCP tool containing "write"
- "mcp__.*" → every MCP tool call

SessionStart matchers are worth knowing individually:
| Matcher | When it fires |
|---|---|
startup | Brand new session, first time opening the project |
resume | Reopening an existing session |
clear | After /clear resets the conversation |
compact | After /compact compresses context |
This lets you run different setup logic for each scenario. A startup hook might do heavy initialization. A compact hook just re-injects the critical context that got compressed away.
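Instead of four separate hook entries, you can also branch inside one SessionStart script on the payload's source field. The context file names below are illustrative placeholders, not files this article defines.

```shell
#!/bin/bash
# Single SessionStart hook: map each trigger to a context file.
context_file_for() {
  case "$1" in
    startup) echo ".claude/onboarding.md" ;;        # heavy first-run context
    resume)  echo ".claude/recent-decisions.md" ;;  # lighter refresher
    compact) echo ".claude/context-survival.md" ;;  # re-inject compressed rules
    *)       echo "" ;;                             # clear: start fresh
  esac
}

# In the hook itself, read the source from stdin and emit the file:
#   SOURCE=$(jq -r '.source // ""')
#   FILE=$(context_file_for "$SOURCE")
#   [ -n "$FILE" ] && [ -f "$FILE" ] && cat "$FILE"
```

Separate matcher entries keep settings.json declarative; a single branching script keeps all the logic in one place. Either works.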
Async hooks ("async": true) are available for command type only. The hook runs in a background process — Claude doesn’t wait for it. Use this for anything that takes more than a second: running test suites, generating reports, syncing to external systems. The hook can still write files that Claude reads on the next turn.
CLAUDE_ENV_FILE (SessionStart only) lets you inject environment variables for the entire session:
```bash
#!/bin/bash
# Runs on SessionStart — sets env vars Claude will see for the whole session
if [ -n "$CLAUDE_ENV_FILE" ]; then
  echo "export PROJECT_ENV=staging" >> "$CLAUDE_ENV_FILE"
  echo "export API_BASE_URL=https://api-staging.internal.com" >> "$CLAUDE_ENV_FILE"
fi
```

This is cleaner than setting env vars globally — you can scope them to project and session.
Stop hook loop prevention — if your Stop hook forces Claude to continue, you need to let it stop eventually. Check for the stop_hook_active flag:
```bash
INPUT=$(cat)
if [ "$(echo "$INPUT" | jq -r '.stop_hook_active')" = "true" ]; then
  exit 0  # Hook already ran once this turn — allow Claude to stop
fi
# ... your actual logic ...
```

Without this, a Stop hook that always returns {"ok": false} creates an infinite loop.
Debugging Hooks
When a hook isn’t behaving, three techniques cover 90% of cases.
Test standalone with sample JSON:
```bash
echo '{
  "tool_name": "Bash",
  "tool_input": {"command": "rm -rf dist/"},
  "session_id": "test-session"
}' | .claude/hooks/block-dangerous.sh
echo "Exit code: $?"
```

Run this directly. You’ll see exactly what the hook does with realistic input before Claude ever touches it.
Use claude --debug for verbose output:
```bash
claude --debug
```

This shows every hook invocation, the payload sent, the response received, and whether it blocked or continued. When a hook silently fails, this is where you find out why.
Common gotchas:
- jq not installed — most hooks depend on it; run which jq to verify
- Script not executable — chmod +x .claude/hooks/your-hook.sh
- Wrong exit code — exit 2 to block, exit 0 to allow, anything else is a non-blocking error
- JSON on stdout vs stderr — put error messages on stderr, JSON responses on stdout
- Prompt hook timeout — Haiku is fast but network latency matters; keep timeout at 30+ seconds for prompt hooks
Why Hooks Are Claude Code’s Real Differentiator
Cursor doesn’t have a hooks system. GitHub Copilot doesn’t have one. Windsurf doesn’t have one. They’re all assistants that respond to you. Claude Code is a platform you can program.
When you can intercept any action, delegate decisions to AI, spawn verification agents, and integrate with external systems — all without modifying your application code — you’ve crossed from “AI assistant” into “autonomous development platform.”
The 4-event, shell-script-only mental model of hooks gets you security and auto-formatting. The full model gets you AI-verified task completion, context that survives compaction, async test pipelines, team quality gates, and webhook integrations that build real dashboards.
The infrastructure is there. Most developers just haven’t explored past the first four events.
Resources
- Claude Code Hooks: Basics Guide — if you need the foundation first
- Security Hooks in Production — blocking dangerous commands in depth
- Claude Code Components Explained — where hooks fit in the larger architecture
- claude-code-hooks-mastery — 13 working hook examples on GitHub
- claude-code-hooks-multi-agent-observability — multi-agent monitoring patterns
- Official Hooks Reference — complete event and field documentation
Phase 11.3 of the Claude Code Mastery course covers hooks in full depth — event schemas, handler types, testing strategies, and team-aware patterns. Phases 1-3 are free.