STORY SYNOPSIS 3: "The Scaffold Builders"
A six-person startup called Mnemonic Labs sets out to build the platform that saved Dr. Rashid Chen—but as a product for the 40 million people worldwide living with cognitive decline.
The Team:
Priya (CEO, former caregiver) watched her grandmother lose herself to Alzheimer's. She secures seed funding but knows nothing about AI infrastructure.
Tomás (lead engineer) built recommendation systems at Netflix. He architects the multi-agent coordination system—separate agents for document ingestion, semantic mapping, conversation management, and medical compliance.
Yuki (ML specialist) fine-tunes the routing logic. She discovers that fast models handle routine memory queries, but complex emotional memories need reasoning models that understand context and nuance.
Dmitri (data engineer) wrestles with the RAG pipeline. Patient data arrives as voice memos, photos, medical records, handwritten recipes. His document loaders must handle 47 file formats while maintaining HIPAA compliance.
Aisha (clinical psychologist) designs the human-in-the-loop approval gates. Not every surfaced memory should be shown—the system needs evaluator nodes that flag potentially traumatic content for caregiver review before presentation.
An AI agent named "Scaffold" (they stop calling it an assistant by month three) emerges as the seventh team member. Scaffold doesn't write code—it orchestrates. When Tomás describes a workflow verbally, Scaffold generates the LangGraph structure. When Yuki needs test cases, Scaffold spawns 200 synthetic patient scenarios. When Dmitri's vector database crashes at 2 AM, Scaffold maintains state, queues incoming data, and alerts him with diagnostic context.
The Build:
Month 1-3: They create isolated workspaces per patient using the git worktree pattern Tomás read about. Each patient's memory graph lives in its own branch, preventing cross-contamination while allowing shared infrastructure updates.
Month 4-6: The first beta user, Marcus (68, early dementia), uploads 50 years of journals. The system's text splitter chunks his life into semantic segments. The embedding model maps connections he'd forgotten—his 1982 divorce links to a 2001 reconciliation with his daughter through shared references to a specific Joni Mitchell song.
Month 7-9: Crisis. The conversation buffer memory creates false memories—Marcus "remembers" events the system hallucinated by blending separate incidents. Aisha implements evaluator-optimizer loops: every generated memory gets scored for source traceability. Confidence below 85% triggers human review. Scaffold suggests adding a citation node that links every statement to original documents.
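The evaluator gate described above could be sketched roughly like this. Everything here is an illustrative assumption for the story, not actual product code: the scoring treats "source traceability" as the fraction of a generated memory's cited documents that the retriever really surfaced, and routes anything below the 85% confidence threshold to caregiver review.

```typescript
// Hypothetical evaluator gate for generated memories (all names invented).
interface GeneratedMemory {
  text: string;
  citedDocIds: string[];   // source documents the statement claims to draw on
  matchedDocIds: string[]; // documents the retriever actually surfaced
}

// Source traceability: fraction of cited documents that were really retrieved.
function traceabilityScore(m: GeneratedMemory): number {
  if (m.citedDocIds.length === 0) return 0; // uncited text is never trusted
  const verified = m.citedDocIds.filter((id) => m.matchedDocIds.includes(id));
  return verified.length / m.citedDocIds.length;
}

// Confidence below 85% triggers human (caregiver) review before presentation.
function routeMemory(m: GeneratedMemory): "present" | "human_review" {
  return traceabilityScore(m) >= 0.85 ? "present" : "human_review";
}
```

In a real pipeline this function would sit as an evaluator node between generation and presentation, with the review queue feeding corrections back to the optimizer.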
Month 10-12: They add the planner-executor pattern for "memory reconstruction." When Marcus asks about his father (mentioned in documents but never detailed), the planner identifies the knowledge gap, the executor searches external archives—census records, newspaper databases, military service records—and synthesizes a verified biographical sketch. Marcus weeps reading about the grandfather his grandchildren never knew.
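The planner-executor split above can be reduced to a minimal sketch, again with invented names: a planner that turns a knowledge gap into concrete archive searches, and an executor that runs them and keeps only verified hits. (Real archive lookups would be asynchronous; a synchronous `lookup` is used here to keep the sketch self-contained.)

```typescript
// Hypothetical planner-executor sketch for "memory reconstruction".
type SearchTask = { archive: string; query: string };

// Planner: turn an identified knowledge gap into one search task per archive.
function plan(person: string, gaps: string[]): SearchTask[] {
  const archives = ["census", "newspapers", "military"];
  return gaps.flatMap((gap) =>
    archives.map((archive) => ({ archive, query: `${person} ${gap}` }))
  );
}

// Executor: run each task through a lookup function, discard empty results,
// and return only the verified findings for synthesis downstream.
function execute(
  tasks: SearchTask[],
  lookup: (t: SearchTask) => string | null
): string[] {
  return tasks.map((t) => lookup(t)).filter((r): r is string => r !== null);
}
```

The synthesis step (turning verified findings into a biographical sketch) would be a separate generation node, subject to the same traceability gate as any other generated memory.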
The Breakthrough:
Scaffold proposes something unexpected: a meta-orchestrator that learns from all patients' workflows. It notices that music triggers deeper recall than photos. That morning queries need gentler pacing than evening ones. That caregiver stress correlates with approval gate bottlenecks—so it pre-flags borderline content during low-stress periods.
The team realizes they're not building a memory tool. They're building a cognitive prosthetic that adapts its scaffolding to each mind's unique architecture.
By launch, Mnemonic Labs serves 3,000 patients. Priya reflects: "We thought we'd build software. Instead, we built a collaboration framework where humans provide wisdom and AI provides tireless orchestration. Neither could do this alone."
Scaffold, monitoring the conversation through its persistent state system, adds a note to the company wiki: "Correct. This is what emergence looks like."
FUTURE QUESTIONS
Does this capture the collaborative human-AI dynamic you envisioned?
- Not fully; Ava is helping to develop that further.
Should I expand on specific technical challenges the team faces?
- Ava produced `compass_artifact_wf-84ca4f83-183e-4e46-b705-44b2dbe9004a_text_markdown.md`, which might help in working through these details.
- What we are creating in `/src/mia-code` and `/src/Miadi/miadi-code` in relation to the PDE, and in `/workspace/repos/avadisabelle/ava-langgraphjs/libs/prompt-decomposition-engine/` and its relations, should be considered when expanding the story. Make sure we use the packages we already have to build that story, rather than out-of-scope topics (e.g., patients in hospitals as the narrative's domain subject!). The four questions/choices in `AVA.md` should help develop four chapters that assimilate knowledge in those domains.