
Claude Code + Subagents: Orchestrate an Entire Product Team

Deploy 25+ agents in parallel waves, simulating PM, engineers, QA, and red team—all sharing a single knowledge graph.

The Hero Demo: 68 Minutes, 25 Agents, Real Production Work

On January 21, 2025, we ran a comprehensive demonstration of multi-agent orchestration using maenifold. This wasn't a toy example—it was real production work that discovered and fixed critical bugs.

Demo Stats

Duration: 68 minutes total
Agents Deployed: 25 agents across 2 phases
Issues Fixed: 4 bugs/features
Test Coverage: 2,031 lines of tests added
Knowledge Graph Impact: 171,506 new concept relations

Part 1: PM-lite Protocol (Discovery & Testing)

Duration: 28 minutes
Agents: 12 agents across 4 waves
  • Wave 1: 3 agents (core functionality)
  • Wave 2: 4 agents (integration testing)
  • Wave 3: 3 agents (edge cases)
  • Wave 4: 2 agents (verification)
Coordination: Sequential thinking only (no rigid workflow)
Result: Discovered a critical move operation bug; 85% test success rate

Part 2: Agentic-SLC Workflow (Issue Remediation)

Duration: 40 minutes
Agents: 13 agents total
  • 7 implementation agents across 3 waves
  • 6 discovery/validation agents
Coordination: 17-step workflow with quality gates
Result: Fixed 3 issues and added comprehensive regression tests

Prerequisites

  • Claude Code installed and configured
  • Understanding of multi-agent patterns
  • Optional: Read demo artifacts first for context

Setup for Multi-Agent Orchestration

1. Single Claude Code instance as orchestrator

# Set shared knowledge location (use $HOME, not "~"; tilde does not expand inside double quotes)
export MAENIFOLD_ROOT="$HOME/project-sprint"

# Start Claude Code
claude
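Before launching waves, it can help to confirm the shared root resolves and is writable, since every agent depends on it; a minimal shell check using the example path above:

# Confirm the shared knowledge root exists and is writable
mkdir -p "$MAENIFOLD_ROOT"
test -w "$MAENIFOLD_ROOT" && echo "Shared graph root ready: $MAENIFOLD_ROOT"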

2. Orchestrator creates plan

"Create a sequential thinking session for orchestrating test coverage. Plan agent waves: discovery → implementation → verification → red team."

3. Deploy agent waves (using Claude Code subagents)

"Launch 4 parallel subagents:
- TST-001: Test API endpoints
- TST-002: Test CLI parity
- TST-003: Test error handling
- TST-004: Performance benchmarks

Each agent should write findings to memory with [[test-result]] tags."

4. Agents work in parallel, writing to shared graph

  • Each agent has own sequential thinking session
  • All write to same memory:// location
  • Knowledge graph automatically links related concepts
  • Orchestrator monitors progress via RecentActivity
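To make the shared graph concrete, here is a sketch of the kind of note an agent might write. The file name and layout are illustrative (maenifold does not mandate this exact format); the [[...]] wiki-links are what link the note into the graph:

# TST-002: CLI Parity Findings

Agent: TST-002
Status: complete

- [[test-result]] 14 of 16 CLI commands behave identically to the API
- [[bug]] Move operation fails on one tested path; repro steps documented here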

5. Orchestrator synthesizes results

"Search memories for [[test-result]] from the last 10 minutes. Build context and create summary report."

Walkthrough: Simplified PM-lite Demo

Let's recreate a scaled-down version of the demo (4 agents instead of 12):

Step 1: Orchestrator Creates Plan

Claude Code Prompt (you, as PM):

"I need to test our search functionality. Create a plan for 4 test agents: Agent A (hybrid search), Agent B (semantic search), Agent C (full-text search), Agent D (edge cases). Create a sequential thinking session to track orchestration."

Step 2: Launch Wave 1 (Parallel Testing)

Claude Code Prompt:

"Launch 4 subagents in parallel. Each should: 1) Run assigned tests, 2) Write results to memory with [[test-result]] tag, 3) Document any bugs with [[bug]] tag. Use agent names: TST-A, TST-B, TST-C, TST-D"

What Happens:

  • Claude Code spawns 4 parallel agents (via Task tool)
  • Each agent runs independently
  • Each writes to memory://testing/ folder
  • All agents share the knowledge graph
  • You see 4 progress indicators running simultaneously
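As the wave runs, the shared folder accumulates one note per agent. A hypothetical layout (file names invented for illustration):

memory://testing/
├── tst-a-hybrid-search.md
├── tst-b-semantic-search.md
├── tst-c-fulltext-search.md
└── tst-d-edge-cases.md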

Step 3: Monitor Progress

Claude Code Prompt:

"Check recent activity. Show me what each agent has completed."

Example Output:

Recent activity:
- TST-A: Completed hybrid search tests (2 min ago) ✅
- TST-B: Found edge case bug in semantic search (1 min ago) ⚠️
- TST-C: Full-text tests passing (1 min ago) ✅
- TST-D: Edge case testing in progress...

Step 4: Synthesize Results

Claude Code Prompt:

"Search memories for [[test-result]] and [[bug]]. Build context and create summary report."

maenifold Automatically:

  1. Searches all agent memories
  2. Finds test results + bug reports
  3. Builds concept graph showing relationships
  4. Generates comprehensive report

Report Includes: Test coverage matrix, pass/fail summary, bug details with links to agent reports, and recommendations for next steps. A sketch follows below.
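The synthesized report might be shaped like this (structure and findings mirror the walkthrough above; the exact format is illustrative, not the demo's actual output):

# Search Functionality Test Report

## Pass/Fail Summary
- TST-A (hybrid search): pass
- TST-B (semantic search): 1 bug found, see [[bug]] note
- TST-C (full-text search): pass
- TST-D (edge cases): pass

## Bugs
- Semantic search edge case, reported by TST-B, linked to the agent's memory note

## Recommendations
- Add a regression test for the semantic search edge case before the next wave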

Real Demo Artifacts

We've preserved the complete demo artifacts showing every step:

Location: /Users/brett/src/ma-collective/maenifold/assets/demo-artifacts

Part 1: PM-lite Protocol (part1-pm-lite/)

  • orchestration-session.md: Real-time orchestration thoughts (session-1758470366887)
  • test-matrix-orchestration-plan.md: 50+ test cases, 4-wave architecture
  • E2E_TEST_REPORT.md: Final results showing 85% success rate
  • demo-final-report.md: Comprehensive report from VRFY-02 agent

Part 2: Agentic-SLC Workflow (part2-agentic-slc/)

  • agentic-slc-thinking-session.md: Sprint progress (session-1758474798193)
  • agentic-slc-workflow-session.md: Complete 17-step workflow (includes embedded RTM.md content)
  • RTM.md: Requirements traceability matrix (27 atomic items) - embedded in workflow session, not standalone
  • sprint-specifications.md: Detailed specs with line numbers
  • implementation-plan.md: 7 agents, 3 waves, 22 tasks
  • code-justification-report.md: Justification for every line of code
  • sprint-retrospective.md: Learnings and metrics
# Browse the artifacts:
cd /Users/brett/src/ma-collective/maenifold/assets/demo-artifacts
cat README.md # Start here for overview
cd part1-pm-lite
cat orchestration-session.md # See real orchestration thoughts
# Note: RTM.md content is embedded in part2-agentic-slc/agentic-slc-workflow-session.md
# Look for the RTM section within the workflow session file

Key Patterns from Demo

Pattern 1: Adaptive Wave Deployment

Instead of launching all 12 agents at once, the PM orchestrator:

  1. Launched Wave 1 (4 agents): Core functionality tests
  2. Monitored results via RecentActivity
  3. Discovered critical bug in Wave 1 results
  4. Adapted Wave 2 plan based on findings
  5. Launched Wave 2 (3 agents): Bug investigation + regression tests
  6. Continued adaptive deployment through Waves 3 & 4

Result: More efficient than a rigid plan; critical issues surfaced early
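In prompt terms, the adaptive loop alternates launch, monitor, and re-plan. A condensed sketch (wording is illustrative, not the demo's exact prompts):

"Launch Wave 1: 4 subagents covering core functionality tests."
"Check recent activity and summarize Wave 1 findings."
"Wave 1 surfaced a critical move operation bug. Re-plan Wave 2:
3 agents for bug investigation plus regression tests, then launch."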

Pattern 2: Shared Sequential Thinking

All 12 agents contributed to the SAME sequential thinking session:

  • PM wrote orchestration thoughts
  • Agents wrote progress updates
  • Verification agents wrote validation results
  • Red team wrote security concerns

Result: Complete audit trail, single source of truth for entire sprint
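An excerpt from such a session might read like this (thought numbers and all agent names except VRFY-02 are invented for illustration):

Thought 1  [PM]      Plan 4 waves; Wave 1 covers core functionality.
Thought 7  [TST-03]  Move operation bug reproduced; details in [[bug]] note.
Thought 19 [VRFY-02] Fix verified; regression coverage added.
Thought 23 [RED-01]  Path traversal attempt rejected as expected.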

Pattern 3: Knowledge Graph Synthesis

After all agents completed work:

  • 171,506 new concept relations added to the graph
  • BuildContext on [[test-coverage]] revealed connections across all agent work
  • Visualize generated architecture diagrams from agent findings
  • SearchMemories instantly retrieved related work across 25 agents

Result: Knowledge persists beyond demo, available for future work
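To tap that persisted knowledge later, query the graph the same way the orchestrator did. Two illustrative prompts:

"Build context for [[test-coverage]] and show how the agents' findings connect."
"Search memories for [[bug]] across the sprint and list any still-open items."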

Next Steps

  • Start with single-agent workflows

    Try UC3 first; you don't need multi-agent orchestration to benefit

  • Explore Sequential Thinking

    Foundation for both single and multi-agent workflows

  • Browse demo artifacts

    See real orchestration patterns and agent coordination strategies