APOLLO GTME Resource Routing World Model
LEADERSHIP BRIEF
2026-04-23
Internal · Stephanie + VPs
For Stephanie Ervin and Apollo VP Leadership

Route signals,
not meetings.

880K monthly signal-moments. One substrate. Automated triage. Human attention where it counts.

01
Executive Summary

The intervention model is headcount-bound. We can break that.

Four-paragraph SCQA. Grounded in Jason's 83-account BoB extraction from 2026-04-18. Every number traces to source.

Situation

Apollo manages roughly 1,400 paying teams. Every one generates signals daily across deliverability, adoption, workflow, CRM sync, AI usage, and sequence health. The monthly detection surface is approximately 880K signal-moments across 21 signal classes. Today those moments are invisible at portfolio scale.

Complication

The intervention model is headcount-bound. 8 GTMEs, 80 accounts each, 640 teams actively monitored. The remaining 760 teams are operationally blind. Hiring cannot close this gap at unit economics that hold.

Question

Can Apollo detect every signal automatically, match each one to a prebuilt resource, deliver it on the right surface, and route only the exceptions to a human operator?

Answer

Yes. The Resource Routing World Model is scaffolded end-to-end. 21 signals, 39 segment primitives, 11 JTBD experiments, 3 intervention tiers, one Snowflake substrate. Pilot on 83 accounts confirms detection. Projection: $1.3M to $1.6M of annualized value across retained ARR, released capacity, and expansion conversion.

BoB ARR Scored
$2.28M
83 accounts, 2026-04-18 snapshot
Verified
HIGH ITBD
19
$474K (20.8%) of book ARR
Verified
Plays Ready
76
91.6% of book scored actionable
Verified
Projected ROI
9.9x
Revenue / cost, full rollout year 1
Projected
Key insight

The MEDIUM-ITBD cohort carries the largest ARR share at 40.8% and is currently served the same way as HIGH (manual, GTME-led). Automating this slice is where the model pays off.

Sources · 05_itbd_scored_v51.json · merged_portfolio_clean.json · FY27 Intervention Library · see Section 03 for full extraction citations
02
The Opportunity

Every account trips signals. Most die unheard.

Apollo's managed book generates intervention-worthy signals faster than a human team can hear them. The world model makes hearing cheap.

The visible intervention surface is under 20% of what's already firing.

Annual signal-moments vs current human coverage
Source Extrapolated from 83-account rates to 1,400 managed paying teams, 21 signal classes, 30-day recurrence
Paying teams managed
1,400
Apollo managed book estimate
Assumed
Covered today
640
8 GTMEs × 80 BoB caps
Verified
Operationally blind
760
No active signal review
Projected
Monthly signal-moments
~880K
1,400 × 21 × 30
Projected
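The ~880K surface estimate above is straight multiplication of the stated inputs; a minimal sketch of the extrapolation (assuming, per the source note, each of the 21 signal classes can fire once per team per day over a 30-day window):

```python
# Back-of-envelope for the monthly detection surface, using the brief's inputs.
PAYING_TEAMS = 1_400     # Apollo managed book estimate (assumed in the brief)
SIGNAL_CLASSES = 21      # catalog in signals.yaml
DAYS_PER_MONTH = 30      # recurrence window

monthly_moments = PAYING_TEAMS * SIGNAL_CLASSES * DAYS_PER_MONTH
covered = 8 * 80         # 8 GTMEs × 80-account BoB caps
blind = PAYING_TEAMS - covered

print(f"{monthly_moments:,} signal-moments/month")  # 882,000, rounded to ~880K
print(f"{covered} teams covered, {blind} operationally blind")
```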
03
Current State

The 83-book reality, measured.

Real extraction from Jason De Leon's managed BoB, 2026-04-18 snapshot. Every count below is directly observed. Each signal card lists the top five accounts by ARR.

Sequences stale or failing · 56
67.5% of book · $1.86M ARR · 81.6% of book ARR
  1. Domo $164K
  2. Achievers $141K
  3. Stephen Gould $119K
  4. ISI Markets $95K
  5. eLocal $77K
Tier 1 auto-regen draft
Deliverability failing · 49
59.0% of book · $1.22M ARR · 53.4% of ARR
  1. Achievers $141K
  2. ISI Markets $95K
  3. Solvo $74K
  4. DemandScience $56K
  5. QA Wolf $53K
Tier 1 digest plus Tier 3 forensic
Adoption dropping · 29
34.9% of book · $835K ARR · 36.7% of ARR
  1. Stephen Gould $119K
  2. eLocal $77K
  3. Solvo $74K
  4. QA Wolf $53K
  5. Pepper $52K
Tier 2 custom nudge
AI usage gap · 10
12.0% of book · $210K ARR · 9.2% of ARR
  1. MissionSquare $46K
  2. Oracle $35K
  3. VectorBuilder $24K
  4. Mission $19K
  5. Fluent $19K
Tier 1 AI walkthrough
Workflow errors · 0
Threshold too blunt · surfaces on managed book
Monitor · retune
CRM sync broken · 0
Not yet scored at field level
Monitor · field-level probe

Medium ITBD is the single biggest unserved slice.

ARR concentration by ITBD level, $K
Source real_data.json · level_counts × arr_by_level · 83 accounts · 2026-04-18

Two signals dominate the failure surface. Both are automatable.

Signal prevalence, accounts affected
Source real_data.json · signal_counts · 2026-04-18
Sources · outputs-reference/05_itbd_scored_v51.json · 09_whitespace_top25.json · 06c_plays_ready.json · extracted 2026-04-22
04
The System

Four layers. One substrate. Every agent queries the same brain.

Dorsey framework applied to Apollo. Raw capabilities feed a unified World Model. Intelligence layer routes. Surfaces deliver. Adapted from Eric Siu's Single Brain implementation, scaled to the GTM book.

04
Surfaces
Where humans interact with the system
Slack digests · #jdl-clyde
Email playbooks · Apollo branded
In-app nudges · pending Product
Notion rollup · Iris backlog
03
Intelligence
Routing logic, tier thresholds, experiment matching
Tier router · 1 · 2 · 3
Experiment matcher · 11 JTBD
Composite risk · 3+ signals · 2+ tiers
Resource library lookup
02
World Model
Unified SIGNAL_EVENTS substrate in GTME_ANALYTICS
21 signals · 6 tiers
Append-only · 365d TTL
Schema-drift safe · probe-gated
Clustered by snapshot_date + team
01
Capabilities
Raw data + AI + existing Apollo stack
Snowflake ANALYTICS_DB · 18+ tables
Looker A360 · Dashboards 1082 + 1084
Notion BoB · Glean · Gmail · Slack
Claude Sonnet + MCP agents
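One way to picture a row in the SIGNAL_EVENTS substrate is as an immutable event record. This is an illustrative sketch only; the field names are assumptions, and the shipped schema is defined in the DataOps DDL bundle, not here:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)  # frozen: the substrate is append-only, rows never mutate
class SignalEvent:
    snapshot_date: date   # clustering key, together with team_id
    team_id: int
    signal_name: str      # one of the 21 classes cataloged in signals.yaml
    tier: int             # routing tier suggested at detection time
    arr_usd: float        # account ARR at snapshot, used for prioritization
    payload: dict = field(default_factory=dict)  # signal-specific, drift-tolerant

evt = SignalEvent(date(2026, 4, 18), 12345, "deliverability_failing", 1, 141_000.0)
```

Keeping signal-specific detail in a loosely typed payload is what makes the table "schema-drift safe": new signal classes land without a DDL change, gated by the schema probe.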
Siu's Single Brain principle

"The model isn't the AI. The model is the data structure that lets the AI understand your specific business. It takes months to build because the data has to accumulate." The 83-book is month one. Apollo-wide is month three onward.

05
Segmentation Framework

Every account carries segment bits across four dimensions.

39 primitive segments compose into 150 to 300 materially populated cohorts. Bit-flag composition. Query by intersection. These primitives make the experiment framework addressable.

Dimension 01

Behavior

8 primitives · what they do in product
B1 AI-first · B2 Sequence-first · B3 Prospector · B4 Workflow builder · B5 Analyst · B6 Integration-leaning · B7 Content authoring · B8 Static
Dimension 02

Usage

6 primitives · intensity
U1 Power · U2 Active · U3 Occasional · U4 Dormant · U5 Gone · U6 Trial-window
Dimension 03

Firmographic

14 primitives · who they are
F1-F5 plan tier · F6-F9 industry · F10-F12 size · F13 self-serve · F14 managed
Dimension 04

JTBD

11 primitives · what they want
J1-J11 drives routing
Composition example

Team 12345 carries B1 + B4 + U2 + F3 + F11 + F14 + J3 + J7 = "AI + Workflow active mid-market Pro managed team doing AI adoption and workflow automation." Targetable cohort. Specific experiment. Specific resource.
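The bit-flag composition above can be sketched with integer flags. Primitive codes follow the brief; the encoding itself, and the exact meanings of F3 and F11 within their stated ranges (plan tier, size), are illustrative assumptions:

```python
from enum import IntFlag, auto

# A subset of the 39 primitives; a team's segment is the union of its bits.
class Seg(IntFlag):
    B1 = auto()   # behavior: AI-first
    B4 = auto()   # behavior: workflow builder
    U2 = auto()   # usage: active
    F3 = auto()   # firmographic: plan tier (assumed Pro)
    F11 = auto()  # firmographic: size (assumed mid-market)
    F14 = auto()  # firmographic: managed
    J3 = auto()   # JTBD: AI adoption
    J7 = auto()   # JTBD: workflow automation

team_12345 = Seg.B1 | Seg.B4 | Seg.U2 | Seg.F3 | Seg.F11 | Seg.F14 | Seg.J3 | Seg.J7

# Query by intersection: does the team match an experiment's target cohort?
target = Seg.B1 | Seg.J3
matches = (team_12345 & target) == target
print(matches)  # True
```

The payoff of bit composition is that any of the 150 to 300 populated cohorts is addressable with a single mask test, no joins required.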

Sources · Jason's BoB segment distribution · DIM_TEAMS.PLAN_TIER · AGG_USER_PROSPECTING_SEARCH_DAILY · TEAM_AI_ASSISTANT_DAILY
06
Experiment Portfolio

Eleven JTBD experiments. Every one has a hypothesis, trigger, intervention, metric.

Each card carries a full experiment spec. Metrics tagged V are extracted from Jason's book, P are projected, and A are assumed pending full-book scoring.

Sources · signals.yaml · extractor 11_signal_extractor.py · mapping to FY27 Intervention Library (68 plays)
07
Tier Routing Model

Three tiers. Defaults to automation. Escalates by rule.

Every fired signal routes through this decision tree. Tier promotions are automatic on failed attempts. Human time is reserved for cases where it matters.

Tier 1

Zero human touch

Automated delivery. Email drafts, Slack nudges, in-app prompts, pre-built resource library lookup. Default path for every signal with a matching resource and a clean fit.

55-65% · share of interventions
5 min · per delivery
Tier 2

Custom solution

Dynamic generation. Tailored messaging, modified workflows, content-engine drafts. Triggered when 3+ JTBD segments active, Enterprise plan, or Tier 1 failed twice.

25-30% · share
20 min · HITL gate
Tier 3

GTM human-led

Live enablement, deployment support, hands-on intervention. Fires on composite_risk_score CRITICAL, Enterprise + Expansion, or Tier 2 failure. Meetings reserved for these cases.

10-15% · share
45 min · avg call

Automating Tier 1 plus Tier 2 alone converts 85% of today's intervention volume from meeting-bound to resource-routed.
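The escalation rules above can be sketched as a routing function. Field names and the dict shape are illustrative assumptions, not the production schema; the composite_risk_score identifier is the brief's own:

```python
def route_tier(signal: dict) -> int:
    """Return tier 1, 2, or 3 per the escalation rules in this section."""
    # Tier 3: composite risk CRITICAL, Enterprise + Expansion, or Tier 2 failed.
    if (signal.get("composite_risk_score") == "CRITICAL"
            or (signal.get("plan") == "Enterprise" and signal.get("expansion"))
            or signal.get("tier2_failures", 0) >= 1):
        return 3
    # Tier 2: 3+ active JTBD segments, Enterprise plan, or Tier 1 failed twice.
    if (len(signal.get("active_jtbd", [])) >= 3
            or signal.get("plan") == "Enterprise"
            or signal.get("tier1_failures", 0) >= 2):
        return 2
    # Default path: automated resource delivery, zero human touch.
    return 1

print(route_tier({"active_jtbd": ["J3"], "plan": "Pro"}))  # 1
```

Because the default is Tier 1 and promotion happens only on explicit triggers or failed attempts, the routing is automation-first by construction.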

Tier distribution: 83-book actual vs 1,400-team projection
Source 83-book actual counts from 05_itbd_scored_v51.json · 1,400-team projection via rate extrapolation · see appendix for methodology
08
Impact Projection

$1.3 million to $1.6 million annualized value. Three scenarios.

Every row below is grounded either in the 83-book extraction or in a stated projection method. Finance should validate the two highest-risk assumptions (baseline churn rate, expansion conversion lift) before the deck converts to funding commitment.

Churn prevention and capacity release are roughly equal. Expansion is small but most sensitive to early wins.

Annual value by scenario, $K midpoint
Source Scenario A retention · B capacity · C expansion · methodology in table below

Monthly intervention volume ramps 8x in year one as coverage extends to the full paying book.

Projected monthly interventions, 12-month ramp
Source PROJECTED · 15% month-1 coverage scaling to 100% by M12 · cadence assumptions 10/4/3 per month by tier

Deliverability repair (J11) carries both the largest cohort and the largest lift. AI adoption (J4) is the most sensitive to early wins.

Per-JTBD impact projection, baseline vs target
Source J4, J5, J8, J9, J11 anchored to 83-book signals · J1, J2, J3, J6, J7, J10 require full-book scoring validation
Scaling economics

Annual cost: approximately $145K (Snowflake credits plus inference plus 0.75 FTE operations). Annual value: $1.44M midpoint. Revenue-to-cost ratio: 9.9x. Payback: roughly 5 weeks. Break-even: 140 teams served, 10% of managed book.

Scenario · Method · Cohort basis · Low · Mid · High · Confidence
A. Churn prevention · 140-210 CRITICAL teams × ARR × 10pp lift · Projected CRITICAL cohort · $594K · $649K · $704K · Medium
B. Capacity released · 4.3 to 4.7K hours × $150/hr blended · Tier 1 + Tier 2 volumes · $651K · $680K · $709K · High
C. Expansion captured · 210 J2 teams × +6pp conversion × $8.5K · J2 Adopt-Expand cohort · $71K · $107K · $143K · Low to medium
TOTAL · $1.32M · $1.44M · $1.56M
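The headline economics reproduce from the stated inputs. A quick check, using only figures given in this section (scenario C is the one whose method is fully specified by its own row):

```python
# Reproduce the headline ratios from the brief's stated inputs.
annual_cost = 145_000         # Snowflake credits + inference + 0.75 FTE ops
annual_value_mid = 1_440_000  # scenario A + B + C midpoints

roi = annual_value_mid / annual_cost
payback_weeks = annual_cost / annual_value_mid * 52

# Scenario C midpoint, per its stated method:
expansion_mid = 210 * 0.06 * 8_500  # J2 teams × +6pp conversion × $8.5K

print(f"ROI {roi:.1f}x, payback ~{payback_weeks:.0f} weeks, "
      f"C mid ${expansion_mid / 1000:.0f}K")
# ROI 9.9x, payback ~5 weeks, C mid $107K
```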
09
Roadmap

Four weeks to v1. One week to productionization gate.

Week 1 is already shipped. The remaining three weeks are scoped, scaffolded, and waiting on one DataOps cycle to unblock the full-book run.

Week 1 · done

Scaffold + signals

  • 11_signal_extractor.py shipped
  • signals.yaml 21-signal catalog
  • schema_probe.py + 6 tier modules
  • DataOps DDL bundle ready
  • RUBRICS + README extended
Gate hit · 6 existing signals match v5.1 ITBD within 2%
Week 2

Scale + segments

  • Schema probe live on managed book
  • 10- and 100-team scale tests
  • 12_segment_classifier.py + segments.yaml
  • Iris DB schema extended 5 fields
  • Resources 1 and 2 authored
Gate · p95 chunk latency under 2s at 100 teams
Week 3

Full book + experiments

  • 1,000-team and full-book scale runs
  • 13_experiment_router.py + experiments.yaml
  • composite_risk_score live
  • Resources 3 to 5 authored
  • Slack daily digest running
Gate · Accuracy 92%+ on all signals; full book under 15 min, 25 credits
Week 4

Leadership readout

  • Company one-pager finalized
  • Stephanie sync review
  • Leadership readout
  • Go / no-go productionization
Gate · Stephanie green-lights productionization funding
10
The Ask

Three small commitments. One fundamentally different operating model.

Everything downstream of these three asks is scaffolded. The decision needed this month is whether to fund the productionization slot in week 5.

What leadership needs to approve

Three allocations this sprint. Everything else is already sitting at the pre-launch line.

Ask 01

DataOps cycle

One cycle to stand up the GTME_ANALYTICS Snowflake schema. Full DDL bundle ready at Douglas/deliverables/snowflake-ddl-gtme-analytics-2026-04-22.sql

Ask 02

Product analyst

Ongoing ownership of the resource library. Signal-to-resource mapping, threshold tuning, authoring pipeline for five gap resources.

Ask 03

Sprint capacity

Week 5 productionization slot for the routing layer plus in-app surface hooks. Hourly scheduled extract to GTME_ANALYTICS and Iris Notion sync.

The payoff

$1.3M to $1.6M in annualized value. 1.5 GTME FTE of capacity released, without adding headcount. 880K monthly signal-moments move from invisible to actionable. The 760 uncovered teams start getting interventions without waiting for a meeting slot.

Sources compiled · 05_itbd_scored_v51.json · 06c_plays_ready.json · 09_whitespace_top25.json · real_data.json · signals.yaml · snowflake-ddl-gtme-analytics-2026-04-22.sql · plan-out-surfacing-this-fancy-naur.md · Forbes 2026-04-10 Jodie Cook · Apollo Beams Brand v1.4