
Claude Code Session Management: Quick Navigation

IAIP Research

Four documents provide everything needed to understand the problem and build the solution.


Document Overview

Document 1: Overview & Architecture

01-Session-Mgmt-Overview.md

What it covers:

  • What Claude Code stores (filesystem layout, JSONL structure)
  • The extraction problem (5 major gaps)
  • Current tool ecosystem (2025-2026)
  • 9-dimensional evaluation framework for session systems
  • Fork detection algorithm
  • Implementation roadmap (high-level)
  • Known limitations and open questions

Read this if: You need to understand the full problem, current tools, and what "good" session management looks like.

Key sections:

  • Part 1: What Claude Code Actually Stores
  • Part 2: The Extraction Problem
  • Part 3: Current Tool Ecosystem (Jan 2026)
  • Part 4: Complete 9-dimensional evaluation framework
  • Parts 5-7: Fork detection, roadmap, architecture decisions

Time to read: 30-45 minutes
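As a preview of the JSONL format Part 1 documents, each line of a session file is an independent JSON object, so a file can be parsed tolerantly line by line. This is a sketch; skipping a truncated final line is an assumption about crash behavior, not a documented guarantee:

```python
import json
from pathlib import Path

def iter_session_records(jsonl_path: Path):
    """Yield one parsed record per line of a session JSONL file.

    Each line is a standalone JSON object, so a partially written
    final line can be skipped instead of corrupting the whole read.
    """
    with jsonl_path.open(encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue  # tolerate a truncated trailing line
```
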


Document 2: Implementation Code

02-Implementation-Code.md

What it covers:

  • 7 working Python classes (copy-paste ready)
  • SessionMetadataExtractor (parse sessions from disk)
  • ForkDetector (find parent-child relationships)
  • SessionIndexDB (SQLite indexing and search)
  • SessionMarkdownExporter (human-readable output)
  • CLI command structure (proposed)
  • SessionTreeVisualizer (Graphviz output)
  • Complete workflow example

Read this if: You're ready to start coding or need to understand how each component works.

How to use:

  1. Copy Section 1 (SessionMetadataExtractor) into your project
  2. Copy Section 2 (ForkDetector) to detect forks
  3. Copy Section 3 (SessionIndexDB) for searching
  4. Copy Section 4 (SessionMarkdownExporter) to export
  5. Copy Sections 5-7 for visualization and CLI

Testing: All code is designed to work standalone. Each class has usage examples.

Time to implement: 2-3 hours to integrate all sections


Document 3: Evaluation Survey

03-Evaluation-Survey.md

What it covers:

  • 9-dimensional scoring framework (0-5 per dimension)
  • Scoring guidance for each dimension
  • Red flags and guiding questions
  • Viability interpretation guide (0-5 overall score)
  • Comparison template for evaluating multiple systems
  • Example scoring (existing tools vs. proposed solution)
  • Decision framework (which tools to adopt/build)

Read this if: You need to evaluate existing tools or ensure your build meets requirements.

How to use:

  1. Print or copy the scoring template
  2. For each system, score all 9 dimensions
  3. Calculate overall viability
  4. Use decision framework to proceed
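Steps 2-3 are plain averaging; a minimal helper, assuming the 0-5 scale per dimension that Document 3 describes:

```python
def viability_score(dimension_scores: dict[str, float]) -> float:
    """Average nine 0-5 dimension scores into one 0-5 viability score."""
    if len(dimension_scores) != 9:
        raise ValueError("expected exactly 9 dimension scores")
    for name, score in dimension_scores.items():
        if not 0 <= score <= 5:
            raise ValueError(f"{name} out of range: {score}")
    return sum(dimension_scores.values()) / 9
```
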

Key insight: No existing tool scores above 3/5, which justifies building a custom solution.

Time to use: 30 minutes per system evaluated


Document 4: Weekly Roadmap

04-Roadmap-Weekly.md

What it covers:

  • 4-week sprint (40 hours total)
  • Phase 1 (Week 1): Foundation - extraction, indexing, export
  • Phase 2 (Weeks 2-3): Discovery - CLI tool, visualization, search
  • Phase 3 (Weeks 3-4): Production - testing, packaging, documentation
  • Week-by-week breakdown
  • Definition of done for each task
  • Success metrics
  • Risk mitigation

Read this if: You're planning the build and need a timeline.

How to use:

  1. Use Phase 1 for immediate gains (6 hours)
  2. Use Phase 2 to make the tool usable (12 hours)
  3. Use Phase 3 to make it production-ready (24 hours)
  4. Or split phases across team members where work can run in parallel

Checkpoints: End of each week has clear go/no-go criteria

Time commitment: ~10 hours/week for 4 weeks


How to Use These Documents Together

Scenario 1: "I need to understand the problem"

  1. Read Part 1 & 2 of Document 1 (15 min)
  2. Read Part 3-4 of Document 1 (20 min)
  3. Skim Document 4 for timeline (5 min)

Total: 40 minutes

Scenario 2: "I'm evaluating whether to build or buy"

  1. Read Document 1 completely (45 min)
  2. Read Document 3 completely (30 min)
  3. Score existing tools using the Document 3 template (30 min per tool)

Total: 75 minutes + scoring time

Scenario 3: "I'm starting development today"

  1. Read Document 1, Parts 5-7 (fork detection, decisions, roadmap) (20 min)
  2. Read Document 2 completely (30 min)
  3. Read Document 4, Phase 1 section (15 min)
  4. Start copying code from Document 2 (2-3 hours)

Total: 3.5-4 hours to begin coding

Scenario 4: "I want the complete roadmap"

  1. Read Document 1 completely (45 min)
  2. Read Document 2, Sections 1-4 (30 min)
  3. Read Document 4 completely (30 min)
  4. Follow week-by-week tasks

Total: 105 minutes + 40 hours implementation

Key Concepts Across Documents

Session

A complete conversation with Claude Code in one directory. Stored as ~/.claude/projects/-home-user-project/conversation.jsonl.
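The directory name appears to encode the project path with slashes turned into dashes; a small helper based on that inference (verify the rule against your own ~/.claude/projects before relying on it):

```python
from pathlib import Path

def project_dir_name(project_path: str) -> str:
    """Encode a project path the way the example above suggests:
    '/home/user/project' -> '-home-user-project'.
    (Inferred from the sample path, not a documented rule.)"""
    return project_path.replace("/", "-")

def session_files(project_path: str) -> list[Path]:
    """List JSONL session files stored for a given project."""
    d = Path.home() / ".claude" / "projects" / project_dir_name(project_path)
    return sorted(d.glob("*.jsonl"))
```
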

Fork

When you edit a previous message and continue from that point. Creates a branch in the conversation tree. Currently not tracked by Claude Code.

DAG (Directed Acyclic Graph)

A data structure representing session forks. Parent sessions can have multiple children. Used to track lineage.
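The parent-to-children shape can be sketched with a plain adjacency map; this is an illustration, not ForkDetector's actual output format:

```python
from collections import defaultdict

def build_fork_dag(edges: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Build a parent -> children adjacency map from (parent, child) pairs.

    A parent session may have many children (several forks), but each
    child has exactly one parent, so the structure stays acyclic.
    """
    dag: dict[str, list[str]] = defaultdict(list)
    for parent, child in edges:
        dag[parent].append(child)
    return dict(dag)

def descendants(dag: dict[str, list[str]], root: str) -> list[str]:
    """All sessions forked (directly or transitively) from `root`."""
    found: list[str] = []
    stack = list(dag.get(root, []))
    while stack:
        node = stack.pop()
        found.append(node)
        stack.extend(dag.get(node, []))
    return found
```
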

Metadata

Session information: path, created date, message count, token usage, tools used. Extracted from JSON structure.

Full-Text Search (FTS)

SQLite feature for fast keyword search. Enables searching across 100K+ messages in <100ms.
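The speed claim above rests on SQLite's FTS5 extension; a minimal in-memory sketch (table and column names are illustrative, not Document 2's actual schema; requires an SQLite build with FTS5, the default in modern Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE messages USING fts5(session_id, content)")
conn.executemany(
    "INSERT INTO messages VALUES (?, ?)",
    [
        ("s1", "refactor the auth module"),
        ("s2", "write unit tests for the parser"),
    ],
)
# MATCH hits the FTS index rather than scanning rows,
# which is what keeps keyword search fast at 100K+ messages.
rows = conn.execute(
    "SELECT session_id FROM messages WHERE messages MATCH ?", ("auth",)
).fetchall()
```
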

MCP (Model Context Protocol)

Protocol for agent-to-agent communication. Allows Claude to query session data directly.

Viability Score

Overall assessment of a session management system (0-5). Average of all 9 dimensions.


Quick Decision Tree

```
Start: Do you want to manage Claude Code sessions?
│
├─ YES: Read Document 1, Parts 1-2
│   └─ "Do existing tools solve this?"
│       ├─ YES: Use existing tool + read Document 3 to evaluate
│       └─ NO: Proceed to build
│           └─ "Ready to build?"
│               ├─ YES, immediately: Read Documents 2 + 4, Phase 1
│               └─ YES, planning first: Read all documents
│
└─ NO: Read Document 3 (evaluation framework is still useful)
```


Specific Questions & Where to Find Answers

| Question | Document | Section |
|---|---|---|
| What does Claude Code store? | 1 | Part 1 |
| Why is this a problem? | 1 | Part 2 |
| What tools exist? | 1 | Part 3 |
| How do I evaluate a tool? | 3 | All dimensions |
| How do I detect forks? | 1 | Part 5 |
| What code do I use? | 2 | Sections 1-4 |
| How long will this take? | 4 | Overview + breakdown |
| What's the architecture? | 1 | Part 7 |
| What are the limitations? | 1 | Part 8 |
| What should I ask my team? | 1 | Part 9 |
| How do I visualize forks? | 2 | Section 6 |
| What CLI commands should I build? | 2 | Section 5 |
| Is this production-ready? | 4 | Checkpoints |

File Sizes & Completion Times

| Document | Lines | Words | Est. Read Time |
|---|---|---|---|
| 01-Overview | 500+ | 8,000+ | 45 min |
| 02-Implementation | 600+ | 7,000+ | 30 min (skim) / 2-3 hr (code) |
| 03-Survey | 450+ | 6,500+ | 30 min |
| 04-Roadmap | 400+ | 5,500+ | 30 min (overview) / 40 hr (execution) |
| Total | 1,950+ | 27,000+ | ~2 hr reading + 40 hr building |

Next Steps

Immediate (30 minutes)

  • Read Document 1, Parts 1-3
  • Understand the problem and current landscape

Short-term (This week)

  • Read Document 1 completely
  • Read Document 3 (if evaluating tools)
  • Decide: build or use existing

Medium-term (This sprint)

  • Read Document 2 and 4
  • Set up development environment
  • Start Phase 1 (extraction)

Long-term (Next 4 weeks)

  • Follow Document 4 roadmap
  • Deliver working tool
  • Team adoption and feedback

Contact & Feedback

These documents represent:

  • Academic review of multi-agent session management (2024-2025 literature)
  • Analysis of 5+ existing tools
  • Proposed architecture for claude-session-manager
  • 4-week implementation sprint

If gaps or errors surface as you work through the material, fold the corrections into the weekly roadmap and iterate.

Recommended approach: Start with Phase 1 (Document 4), then iterate based on actual data and team feedback.


Document Status

  • āœ“ Document 1: Complete overview + framework
  • āœ“ Document 2: Working code, ready to integrate
  • āœ“ Document 3: Comprehensive evaluation tool
  • āœ“ Document 4: Detailed week-by-week roadmap

All documents are ready for immediate use.