Claude Code Session Management: Quick Navigation
Four documents provide everything needed to understand the problem and build the solution.
Document Overview
Document 1: Overview & Architecture
01-Session-Mgmt-Overview.md
What it covers:
- What Claude Code stores (filesystem layout, JSONL structure)
- The extraction problem (5 major gaps)
- Current tool ecosystem (2025-2026)
- 9-dimensional evaluation framework for session systems
- Fork detection algorithm
- Implementation roadmap (high-level)
- Known limitations and open questions
Read this if: You need to understand the full problem, current tools, and what "good" session management looks like.
Key sections:
- Part 1: What Claude Code Actually Stores
- Part 2: The Extraction Problem
- Part 3: Current Tool Ecosystem (Jan 2026)
- Part 4: Complete 9-dimensional framework
- Parts 5-7: Fork detection, roadmap, architecture decisions
Time to read: 30-45 minutes
Document 2: Implementation Code
02-Implementation-Code.md
What it covers:
- 7 working Python classes (copy-paste ready)
- SessionMetadataExtractor (parse sessions from disk)
- ForkDetector (find parent-child relationships)
- SessionIndexDB (SQLite indexing and search)
- SessionMarkdownExporter (human-readable output)
- CLI command structure (proposed)
- SessionTreeVisualizer (Graphviz output)
- Complete workflow example
Read this if: You're ready to start coding or need to understand how each component works.
How to use:
- Copy Section 1 (SessionMetadataExtractor) into your project
- Copy Section 2 (ForkDetector) to detect forks
- Copy Section 3 (SessionIndexDB) for searching
- Copy Section 4 (SessionMarkdownExporter) to export
- Copy Sections 5-7 for visualization and CLI
Testing: All code is designed to work standalone. Each class has usage examples.
Time to implement: 2-3 hours to integrate all sections
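To give a sense of what Section 1's SessionMetadataExtractor does before you open Document 2, here is a minimal sketch of the same idea. This is illustrative, not the document's actual class, and the `timestamp` field name is an assumption about the JSONL schema:

```python
import json
from pathlib import Path

def extract_metadata(session_path: Path) -> dict:
    """Summarize one session JSONL file: message count and time span.

    Assumes each line is a JSON object that may carry a "timestamp"
    field (ISO 8601 string) -- adjust to the schema you actually see.
    """
    count = 0
    timestamps = []
    with session_path.open() as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            count += 1
            if "timestamp" in record:
                timestamps.append(record["timestamp"])
    return {
        "path": str(session_path),
        "message_count": count,
        # ISO 8601 strings sort lexically, so min/max give the time span
        "first_timestamp": min(timestamps) if timestamps else None,
        "last_timestamp": max(timestamps) if timestamps else None,
    }
```

The real extractor in Document 2 also pulls token usage and tool invocations, but the parse-one-line-at-a-time shape is the same.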
Document 3: Evaluation Survey
03-Evaluation-Survey.md
What it covers:
- 9-dimensional scoring framework (0-5 per dimension)
- Scoring guidance for each dimension
- Red flags and guiding questions
- Viability interpretation guide (0-5 overall score)
- Comparison template for evaluating multiple systems
- Example scoring (existing tools vs. proposed solution)
- Decision framework (which tools to adopt/build)
Read this if: You need to evaluate existing tools or ensure your build meets requirements.
How to use:
- Print or copy the scoring template
- For each system, score all 9 dimensions
- Calculate overall viability
- Use decision framework to proceed
Key insight: No existing tool scores above 3/5. This justifies building a custom solution.
Time to use: 30 minutes per system evaluated
Document 4: Weekly Roadmap
04-Roadmap-Weekly.md
What it covers:
- 4-week sprint (40 hours total)
- Phase 1 (Week 1): Foundation - extraction, indexing, export
- Phase 2 (Weeks 2-3): Discovery - CLI tool, visualization, search
- Phase 3 (Weeks 3-4): Production - testing, packaging, documentation
- Week-by-week breakdown
- Definition of done for each task
- Success metrics
- Risk mitigation
Read this if: You're planning the build and need a timeline.
How to use:
- Use Phase 1 for immediate (6 hours) gains
- Use Phase 2 to make it usable (12 hours)
- Use Phase 3 to make it production-ready (24 hours)
- Or split across team members if parallelizable
Checkpoints: End of each week has clear go/no-go criteria
Time commitment: ~10 hours/week for 4 weeks
How to Use These Documents Together
Scenario 1: "I need to understand the problem"
- Read Part 1 & 2 of Document 1 (15 min)
- Read Part 3-4 of Document 1 (20 min)
- Skim Document 4 for timeline (5 min) → Total: 40 minutes
Scenario 2: "I'm evaluating whether to build or buy"
- Read Document 1 completely (45 min)
- Read Document 3 completely (30 min)
- Score existing tools using Document 3 template (30 min per tool) → Total: 75+ minutes + scoring time
Scenario 3: "I'm starting development today"
- Read Document 1, Parts 5-7 (fork detection, decisions, roadmap) (20 min)
- Read Document 2 completely (30 min)
- Read Document 4, Phase 1 section (15 min)
- Start copying code from Document 2 (2-3 hours) → Total: 3.5-4 hours to begin coding
Scenario 4: "I want the complete roadmap"
- Read Document 1 completely (45 min)
- Read Document 2, Sections 1-4 (30 min)
- Read Document 4 completely (30 min)
- Follow week-by-week tasks → Total: 105 minutes + 40 hours implementation
Key Concepts Across Documents
Session
A complete conversation with Claude Code, scoped to one project directory. Stored as ~/.claude/projects/-home-user-project/conversation.jsonl.
Fork
When you edit a previous message and continue from that point. Creates a branch in the conversation tree. Currently not tracked by Claude Code.
DAG (Directed Acyclic Graph)
A data structure representing session forks. Parent sessions can have multiple children. Used to track lineage.
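The parent-child structure above can be sketched in a few lines. This is an illustrative snippet, not the ForkDetector from Document 2, and the `uuid`/`parentUuid` field names are assumptions about the message records:

```python
from collections import defaultdict

def build_fork_dag(records: list[dict]) -> dict[str, list[str]]:
    """Build a parent -> children adjacency map from message records.

    Assumes each record carries a "uuid" and an optional "parentUuid"
    (field names assumed for illustration). A parent with more than
    one child is a fork point in the conversation tree.
    """
    children = defaultdict(list)
    for rec in records:
        parent = rec.get("parentUuid")
        if parent is not None:
            children[parent].append(rec["uuid"])
    return dict(children)

def fork_points(dag: dict[str, list[str]]) -> list[str]:
    """Return the nodes where the conversation branched."""
    return [parent for parent, kids in dag.items() if len(kids) > 1]
```

Because each message points only backwards to its parent, cycles cannot form, which is what makes the structure a DAG rather than a general graph.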
Metadata
Session information: path, created date, message count, token usage, tools used. Extracted from the JSONL structure.
Full-Text Search (FTS)
SQLite feature for fast keyword search. Enables searching across 100K+ messages in <100ms.
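The speed claim rests on SQLite's FTS5 virtual tables, the mechanism Document 2's SessionIndexDB builds on. A toy in-memory example of the pattern (table and column names are illustrative; FTS5 is included in standard CPython builds of sqlite3):

```python
import sqlite3

# In-memory database for demonstration; the real index lives on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE messages USING fts5(session_id, body)")
conn.executemany(
    "INSERT INTO messages VALUES (?, ?)",
    [
        ("s1", "refactor the fork detector"),
        ("s2", "add sqlite indexing for search"),
    ],
)
# MATCH performs tokenized full-text search, not substring search:
# "fork" matches the word "fork" but not "refactor".
hits = conn.execute(
    "SELECT session_id FROM messages WHERE messages MATCH ?", ("fork",)
).fetchall()
```

FTS5 maintains an inverted index, which is why keyword queries stay fast even across hundreds of thousands of rows.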
MCP (Model Context Protocol)
Protocol for agent-to-agent communication. Allows Claude to query session data directly.
Viability Score
Overall assessment of a session management system (0-5). Average of all 9 dimensions.
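Computing the viability score is just an average over the nine dimension scores. A sketch, with dimension names invented for illustration (Document 3 defines the real ones):

```python
def viability(scores: dict[str, int]) -> float:
    """Average nine dimension scores (each 0-5) into one viability number."""
    assert len(scores) == 9, "expected one score per dimension"
    return sum(scores.values()) / len(scores)

# Hypothetical scoring of a tool; dimension names are placeholders.
example = {
    "extraction": 4, "indexing": 3, "search": 4, "export": 3,
    "fork_tracking": 2, "visualization": 2, "cli": 3, "mcp": 1, "docs": 3,
}
# viability(example) -> 25/9, about 2.78
```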
Quick Decision Tree
```
Start: Do you want to manage Claude Code sessions?
└─ YES: Read Document 1, Part 1-2
   └─ "Do existing tools solve this?"
      ├─ YES: Use existing tool + read Document 3 to evaluate
      └─ NO: Proceed to build
         └─ "Ready to build?"
            ├─ YES, immediately: Read Document 2 + 4, Phase 1
            ├─ YES, planning first: Read all Documents
            └─ NO: Read Document 3 (evaluation framework is still useful)
```
Specific Questions & Where to Find Answers
| Question | Document | Section |
|---|---|---|
| What does Claude Code store? | 1 | Part 1 |
| Why is this a problem? | 1 | Part 2 |
| What tools exist? | 1 | Part 3 |
| How do I evaluate a tool? | 3 | All dimensions |
| How do I detect forks? | 1 | Part 5 |
| What code do I use? | 2 | Sections 1-4 |
| How long will this take? | 4 | Overview + breakdown |
| What's the architecture? | 1 | Part 7 |
| What are the limitations? | 1 | Part 8 |
| What should I ask my team? | 1 | Part 9 |
| How do I visualize forks? | 2 | Section 6 |
| What CLI commands should I build? | 2 | Section 5 |
| Is this production-ready? | 4 | Checkpoints |
File Sizes & Completion Times
| Document | Lines | Words | Est. Read Time |
|---|---|---|---|
| 01-Overview | 500+ | 8,000+ | 45 min |
| 02-Implementation | 600+ | 7,000+ | 30 min (skim) / 2-3 hr (code) |
| 03-Survey | 450+ | 6,500+ | 30 min |
| 04-Roadmap | 400+ | 5,500+ | 30 min (overview) / 40 hr (execution) |
| Total | 1,950+ | 27,000+ | ~2 hours reading + 40 hours building |
Next Steps
Immediate (30 minutes)
- Read Document 1, Parts 1-3
- Understand the problem and current landscape
Short-term (This week)
- Read Document 1 completely
- Read Document 3 (if evaluating tools)
- Decide: build or use existing
Medium-term (This sprint)
- Read Document 2 and 4
- Set up development environment
- Start Phase 1 (extraction)
Long-term (Next 4 weeks)
- Follow Document 4 roadmap
- Deliver working tool
- Team adoption and feedback
Contact & Feedback
These documents represent:
- Academic review of multi-agent session management (2024-2025 literature)
- Analysis of 5+ existing tools
- Proposed architecture for claude-session-manager
- 4-week implementation sprint
As you work through these documents, address any gaps or corrections by iterating on the weekly roadmap.
Recommended approach: Start with Phase 1 (Document 4), then iterate based on actual data and team feedback.
Document Status
- ✅ Document 1: Complete overview + framework
- ✅ Document 2: Working code, ready to integrate
- ✅ Document 3: Comprehensive evaluation tool
- ✅ Document 4: Detailed week-by-week roadmap
All documents are ready for immediate use.