Module 10.3: Code Review Protocol
Estimated time: ~30 minutes
Prerequisite: Module 10.2 (Git Conventions)
Outcome: After this module, you will have a code review protocol that accounts for AI-generated code, know how to use Claude as a review assistant, and understand both author and reviewer responsibilities.
1. WHY — Why This Matters
A developer submits a PR with 500 lines of Claude-generated code. The reviewer skims it — "looks clean, AI wrote it, probably fine." It ships to production. A bug is discovered a week later: the AI missed an edge case that was implied but not explicit in the requirements. Nobody caught it because both author and reviewer assumed the AI was thorough.
AI-generated code requires MORE scrutiny, not less. This module establishes the “trust but verify” protocol for AI-assisted PRs.
2. CONCEPT — Core Ideas
The AI Code Review Paradox
AI code often LOOKS cleaner than human code. But it can miss:
- Implicit requirements not stated in the prompt
- Context from verbal discussions or past decisions
- Edge cases that “everyone knows” but weren’t mentioned
- Integration patterns from other parts of the codebase
Reviewers let their guard down because it “looks professional.” This is dangerous.
AI-Specific Review Checklist
| Check | Why | Example Issue |
|---|---|---|
| Requirements match | AI may misunderstand | Implemented login, not SSO as discussed |
| Edge cases covered | AI handles explicit, misses implicit | No null check for optional field |
| Context awareness | AI doesn’t know verbal decisions | Used approach rejected in standup |
| Integration fit | AI sees file, not system | New pattern inconsistent with existing |
| Security considered | AI may not prioritize security | SQL built with string concat |
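The last row of the checklist (SQL built with string concat) is worth seeing concretely. A minimal sketch using Python's stdlib sqlite3 module; the table and the malicious input are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

name = "alice' OR '1'='1"  # attacker-controlled input

# UNSAFE: string concatenation lets the input rewrite the query,
# so the WHERE clause matches every row.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + name + "'"
).fetchall()

# SAFE: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)
).fetchall()

print(len(unsafe), len(safe))  # 1 0
```

The concatenated version returns every row because `'1'='1'` is always true; the parameterized version correctly matches nothing.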
Author Responsibilities
When submitting an AI-assisted PR:
- UNDERSTAND every line — if you can’t explain it, don’t submit it
- VERIFY against requirements — not just “it compiles”
- DISCLOSE AI assistance — use 🤖 marker
- HIGHLIGHT uncertainties — “Not sure if this matches our pattern”
- TEST thoroughly — don’t trust “I added tests”
Reviewer Responsibilities
When reviewing an AI-assisted PR:
- DON’T assume correctness — AI code can be subtly wrong
- CHECK requirements — does it solve the right problem?
- VERIFY patterns — does it match existing codebase?
- QUESTION author — can they explain the tricky parts?
- TEST edge cases — AI often misses implicit ones
Claude as Review Assistant
Use Claude to help review, but remember its limits:
- “Review this diff for security issues”
- “What edge cases might this miss?”
- “Does this match patterns in [existing file]?”
BUT: Claude reviewing Claude has blind spots. Human judgment required.
3. DEMO — Step by Step
Scenario: Reviewing a PR for user authentication, generated with Claude Code.
Step 1: Author Self-Review Before Submitting
You: Before I submit this PR, review the authentication implementation.
Check for:
- Security issues
- Missing edge cases
- Inconsistencies with our auth patterns in src/auth/
Claude: POTENTIAL ISSUES FOUND:
1. Password comparison uses == instead of a timing-safe comparison
2. No rate limiting on login attempts
3. JWT expiry is 30 days (existing code uses 24 hours)
4. Missing test for invalid token format

Author fixes issues BEFORE submitting.
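The first finding, comparing secrets with ==, can be sketched with the Python stdlib. This is an illustration only: sha256 stands in for a real password hash such as bcrypt, and hmac.compare_digest supplies the constant-time comparison.

```python
import hashlib
import hmac

# Illustration only: real systems should use a slow password hash
# (bcrypt, argon2), not a bare sha256.
stored = hashlib.sha256(b"correct-password").hexdigest()

def verify_unsafe(candidate: bytes) -> bool:
    # == short-circuits at the first differing character, which can
    # leak timing information to an attacker.
    return hashlib.sha256(candidate).hexdigest() == stored

def verify_safe(candidate: bytes) -> bool:
    # compare_digest runs in constant time regardless of where
    # the inputs differ.
    return hmac.compare_digest(hashlib.sha256(candidate).hexdigest(), stored)

print(verify_safe(b"correct-password"), verify_safe(b"wrong-password"))  # True False
```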
Step 2: Author Submits with Disclosure
## PR Description
### What
Implement user authentication with JWT
### AI Assistance
🤖 Generated with Claude Code
### Areas for careful review
- Token refresh logic (lines 45-67) — unsure if it matches our pattern
- Error message format — Claude suggested, please verify
### Author Checklist
- [x] I understand all code in this PR
- [x] Tested locally with edge cases
- [x] Verified against existing patterns

Step 3: Reviewer Uses Claude
You: Review this auth PR diff for:
- Security vulnerabilities
- Missing edge cases
- Inconsistencies with src/auth/
[paste diff]
Claude: OBSERVATIONS:
- Line 34: Good - uses bcrypt.compare
- Line 56: Question - rate limit is 100/hour, existing uses 10/minute
- Line 78: Missing - no handling for expired refresh token

Step 4: Reviewer Questions Author
Reviewer comment:
"Rate limit is 100/hour but existing code uses 10/minute. Was this intentional?"
Author response:
"Good catch! That was Claude's suggestion. Should match existing. Fixed."

Step 5: Final Human Review
Reviewer:
- Manually tests edge cases
- Verifies author can explain complex sections
- Approves after human judgment, not just AI review
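The rate-limit discrepancy flagged in Steps 3 and 4 reduces to a windowed counter. A minimal sliding-window sketch in Python, using the 10-per-minute figure from the demo (the class and method names are hypothetical):

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: at most max_requests per window_seconds."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()  # times of recently allowed requests

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Evict requests that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            return False
        self.timestamps.append(now)
        return True

# 12 login attempts in the same instant: only the first 10 pass.
limiter = RateLimiter(max_requests=10, window_seconds=60)
results = [limiter.allow(now=0.0) for _ in range(12)]
print(results.count(True))  # 10
```

The point of the demo exchange is not the limiter itself but the policy number: Claude picked 100/hour, the codebase uses 10/minute, and only a human with project context catches the mismatch.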
4. PRACTICE — Try It Yourself
Exercise 1: Pre-Submit Self-Review
Goal: Catch issues before submitting.
Instructions:
- Create a small feature with Claude’s help
- Before submitting, ask Claude to review for issues
- Fix what Claude finds
- Document: what did Claude catch that you missed?
💡 Hint
Prompt: “Review this code for security issues, edge cases, and consistency with [existing file]”
Exercise 2: AI-Aware Review
Goal: Practice the enhanced review checklist.
Instructions:
- Review a colleague’s PR (or an old PR of your own)
- Apply the AI-specific checklist
- Use Claude to assist
- Compare: what did Claude catch vs. what did you catch?
Exercise 3: Understanding Test
Goal: Verify author comprehension.
Instructions:
- For AI-generated code, ask author to explain a complex section
- If they can’t explain it clearly, flag for revision
- Document the exchange
✅ Solution
Rule: “If you can’t explain it, don’t submit it.”
If the author says "Claude wrote it, I'm not sure why" — that's a red flag. The code should be revised until the author understands it.
5. CHEAT SHEET
AI-Specific Review Checklist
[ ] Requirements actually match (not just code quality)
[ ] Edge cases covered (implicit ones too)
[ ] Consistent with existing patterns
[ ] Security considered
[ ] Author can explain every line

Author Responsibilities
- Understand every line
- Verify vs. requirements
- Disclose AI assistance
- Highlight uncertainties
- Test thoroughly
Reviewer Prompts
"Review this diff for security issues"
"What edge cases might this miss?"
"Does this match patterns in [existing file]?"
"What would a senior dev question here?"

PR Template Addition
### AI Assistance
🤖 Generated with Claude Code: Yes/No
### Areas for careful review
- [List uncertain parts]

6. PITFALLS — Common Mistakes
| ❌ Mistake | ✅ Correct Approach |
|---|---|
| “AI wrote it, must be correct” | AI code needs MORE scrutiny, not less |
| Reviewing only code quality | Check: does it solve the RIGHT problem? |
| Submitting code you don’t understand | Rule: explain it or don’t submit it |
| Only Claude reviewing Claude | Human judgment required. AI assists, doesn’t replace. |
| No disclosure of AI assistance | Always flag AI-assisted PRs with 🤖 |
| Skipping edge case testing | AI misses implicit edge cases. Test them. |
| Same review rigor as human code | AI code has different failure modes. Adapt. |
7. REAL CASE — Production Story
Scenario: Vietnamese e-commerce company, major production incident.
What happened:
- Developer used Claude to implement payment retry logic
- Code looked clean, passed tests
- Reviewer approved quickly — “looks professional”
- Production: race condition caused double charges
- Cost: ₫200M in refunds + customer trust damage
Root cause: Tests didn't cover concurrent requests. The AI-generated code had a subtle race condition that looked correct.
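The double-charge bug is a classic check-then-act race. A minimal sketch of the usual mitigation, an idempotency key checked and recorded atomically (all names here are hypothetical; a real system would persist keys with the payment provider or a database, not in memory):

```python
import threading

class PaymentProcessor:
    """Hypothetical sketch: retries are deduplicated by idempotency key."""

    def __init__(self):
        self._lock = threading.Lock()
        self._seen = set()    # idempotency keys already charged
        self.charges = []

    def charge(self, idempotency_key, amount):
        # The check and the record must happen atomically. Without the
        # lock, two concurrent retries can both pass the `in` check and
        # both charge -- the double-charge race from the incident.
        with self._lock:
            if idempotency_key in self._seen:
                return "duplicate"
            self._seen.add(idempotency_key)
            self.charges.append((idempotency_key, amount))
        return "charged"

processor = PaymentProcessor()
# Five concurrent retries of the same order: only one should charge.
threads = [threading.Thread(target=processor.charge, args=("order-42", 100_000))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(processor.charges))  # 1
```

This is exactly the kind of concurrency property that "looks clean, passed tests" misses: single-threaded tests never exercise the race, so reviewers must ask for concurrent-request tests on payment paths.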
Protocol changes implemented:
- AI-assisted PRs require explicit 🤖 label
- Added AI-specific review checklist to PR template
- Author must document “areas of uncertainty”
- Reviewer must ask “can you explain lines X-Y?”
- Critical paths (payment, auth) require 2 reviewers + manual edge case testing
Result: No AI-related incidents in the six months since protocol adoption.
Quote: “AI makes code that looks right. Our job is to verify it IS right.”
Next: Module 10.4: Knowledge Sharing →