PDM Credit-to-Value Plan · Leadership Presentation
Presented by Allan Cheow, GTME
Today · Apr 22, 2026
Renewal · Jun 2 · T-41
Overview · The ask

Convert the remaining 41 days into one compounding motion.

Same ARR. Same team. Every remaining credit routed to a measurable lift. Three foundations land by Week 1. Five plays go live by Week 4. Marco, Jessica, and Diego present their own data to Justin on May 20, thirteen days before renewal close.

Headline state

Renewal
Jun 2, 2026
T-41. Flat $34,272 ARR. GTME confidence: Most Likely.
Main pool remaining
2,391,661
Of 4,324,000 annual. 55.3% headroom for Apr 22 to Jun 2.
AI word ceiling
287.7M
Of 288M. 0.1% used. Effectively unlimited AI foundation headroom.
Live upsell opp
$1,428
Mid-term Inbound CSQL. Closes alongside renewal (May 31).

The three pillars

PILLAR 01

Foundation first

Context Center populated Week 1. Seven AI features sharpen overnight. Waterfall tuned. AI Research extends to five SDRs. One working session is the gate.

PILLAR 02

Scoring routes everything

AI Lead Scoring trained on SFDC closed-won. 70+ flows to sequences. Below-threshold holds. 24x meeting lift on same spend (historical conversion: 273 meetings / 285,957 emails).

PILLAR 03

Intelligence wraps Nooks

Chrome Extension overlays pre-call Research briefs. Post-call webhook fires AI Summary. Auto-populates 6 SFDC custom fields. 133 hours per week saved in pilot cohort.


30/60/90 scorecard

Metric | T-30 (by May 22) | T+0 (Jun 2 renewal) | T+60 (Aug 2)
Context Center populated | Week 1 · 5 personas live | All 7 AI features fed | Quarterly refresh cadence
AI Research daily habit | 5 of 21 SDRs | 21 of 21 SDRs | Standard ritual
Scored TAM volume | 30,000 cohort | 75,000 cohort | 2.5M full TAM (FY27)
Apollo Voice Dial seats | 5 trial cohort | Eval complete · decide | Nooks consolidation path
Workflow automations live | 1 of 3 | 3 of 3 | +5 signal-triggered
Reply rate on AI-written variants | Baseline measured | +15% to +25% | +30% to +40% with signals
The one number that matters today: 287.7 million AI words sit unused out of 288 million allotted. The entire AI foundation lift is gated by Context Center populated in Week 1. No new budget needed. No new tools needed. One working session.
Strategy 01 · TAM Scoring

From volume math to meetings-per-credit math.

PDM's historical conversion is 273 meetings from 285,957 emails. That is 0.095%. Applied to an unranked 30K cohort: ~28 meetings. Applied to a scoring-gated cohort routing to 70+ only: 675 meetings on the same credit spend. That is the 24x story.
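The arithmetic behind these figures can be checked in a few lines (the 675-meeting figure is the plan's modeled output for the scored cohort, not a derived constant):

```python
# Meetings-per-credit math behind the 24x claim.
# Historical baseline: 273 meetings from 285,957 emails.
baseline_rate = 273 / 285_957               # ≈ 0.00095, i.e. 0.095%

# Unranked 30K cohort at the historical rate.
unranked_meetings = int(30_000 * baseline_rate)   # 28 meetings

# Scored cohort (70+ only), modeled output from the plan: 675 meetings.
scored_meetings = 675
lift = round(scored_meetings / unranked_meetings)  # 24x on the same spend

print(unranked_meetings, lift)
```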

Scoring model (three blended layers)

Layer | Input | Weight | What it answers
Fit | Firmographic + technographic vs SFDC closed-won history | 0.45 | Does this account look like customers who close?
Intent | Bombora topic spikes on "marketing attribution", "CAC payback", "first-party data", "measurement" | 0.35 | Is this account researching the problem PDM solves?
Engagement | Apollo past touches + website visits + email open history | 0.20 | Has this contact shown up in your orbit before?

Score tiers and routing

Tier | Range | Volume share | Action | Credits consumed
Hot | 90–100 | 5% | Full waterfall + AI Research brief + sequence enrollment + owner alert | High
Warm | 70–89 | 20% | Full waterfall + sequence enrollment | Medium
Cool | 50–69 | 35% | Basic enrichment + nurture only | Low
Hold | <50 | 40% | No enrichment · revisit quarterly | Zero
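A minimal sketch of how the blended score and tier routing fit together, using the weights and thresholds from the two tables above; the function names and example inputs are illustrative, not Apollo's API:

```python
# Illustrative blend of the three scoring layers (weights from the table above).
WEIGHTS = {"fit": 0.45, "intent": 0.35, "engagement": 0.20}

def blended_score(fit: float, intent: float, engagement: float) -> float:
    """Each layer scored 0-100; returns the weighted blend."""
    return (WEIGHTS["fit"] * fit
            + WEIGHTS["intent"] * intent
            + WEIGHTS["engagement"] * engagement)

def route(score: float) -> str:
    """Tier routing per the table: Hot 90+, Warm 70-89, Cool 50-69, Hold <50."""
    if score >= 90:
        return "Hot: full waterfall + AI Research + sequence + owner alert"
    if score >= 70:
        return "Warm: full waterfall + sequence enrollment"
    if score >= 50:
        return "Cool: basic enrichment + nurture only"
    return "Hold: no enrichment, revisit quarterly"

# Example: strong fit, moderate intent, light engagement.
s = blended_score(fit=92, intent=70, engagement=40)  # 41.4 + 24.5 + 8.0 = 73.9
print(round(s, 1), "->", route(s))                   # routes to Warm
```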
The renewal framing shift: Justin's credit conversation moves from per-dial economics (the Nooks integration surface) to meetings-per-credit economics (the scored-list outcome). Same $34,272 ARR. A different negotiation surface before we reach the May 8 exec call.

Pre-renewal pilot vs FY27 full TAM

Pre-renewal pilot · 30K cohort

Fits inside 2.39M remaining runway. Scores 30,000 contacts across 5 high-intent segments. Produces the evidence base Justin sees on May 20.

Credit consumption: ~30,000 main pool credits. Expected output: 675 meetings vs 28 unranked (24x lift) at PDM's historical conversion ratio.

FY27 full TAM · 2.5M contacts

Deferred to the next plan cycle. Requires Inbound Add-on + full enrichment budget. Sequences into the FY27 plan restructure, not pre-renewal spend.

This is not the ask today. Today's ask is the 30K pilot that proves the model.

Gap requiring Allan confirmation before presenting: Bombora availability in the Engagement Only tier. If not included, the intent layer drops from 0.35 to 0.20 (Apollo engagement-only) and the fit layer carries more weight.
Cross-reference to the client deck: The 24x lift quantifies what the activated stack flow on DEL-PDM-004 Exhibit 10 shows qualitatively. Both describe the same mechanism. Ranking the list before engagement is Stage 03 in the 10-stage flow. Today PDM runs on a flat list (Stage 04 without the build). The 24x math is what Stage 03 activates once AI Lead Scoring ships Week 2.
Strategy 02 · AI Foundation

Context Center Week 1. Eight AI builds sharpen in the same pass.

Every AI feature inside Apollo reads from one shared place. Populate Context Center once, and eight builds sharpen against PDM's voice and patterns: AI Research, AI Lead Scoring, AI Personas, AI Assistant, AI Sequence Writing, AI Email Composer, AI Pre-Meeting Insights, and AI Call Summaries. One working session compounds across the entire rollout below.

Week-by-week rollout

Week | Build | Deliverable | Owner | Gate
Week 1 · Apr 22–28 | Context Center populated | 5 personas · 3 value props · 5 competitors · 5 pain points · 3 CTAs | Jessica + Allan | One working session
Week 1–2 | AI Research rollout | 5 SDRs activated · 25 free jobs each · daily brief ritual | Marco | Context Center complete
Week 2 · Apr 29–May 5 | AI Lead Scoring trained | Model trains on SFDC closed-won · scored field live in search + SFDC | Allan | 6 months clean closed-won data
Week 2 | AI Personas defined | 5 personas live: RevOps leader, Mktg Ops director, DTC growth lead, eComm CMO, Attribution analyst | Jessica | Context Center personas section
Week 3 · May 6–12 | AI Assistant daily habit | 5 SDR daily rituals live + 3 manager weekly rituals | Marco + Jessica | Slack reminders + Chrome pin
Week 3 | AI Sequence Writing A/B | 3 existing sequences get AI-written variants · 200-send/variant test | Jessica | Context Center voice fed
Week 4 · May 13–19 | AI Email Composer + Pre-Meeting + Call Summary | All 3 features on task/calendar surfaces · SFDC auto-populate | Marco | Prior weeks complete

Credit consumption map

Free (included in trial)

AI Research · 25 jobs/user/week free tier. 25 jobs × 5 SDRs = 125 jobs/week ≈ 500/month.

Main pool (Engagement Only plan)

AI Sequence Writing · AI Email Composer · AI Call Summary · Pre-Meeting Insights · Assistant. All run on AI words (287.7M remaining of 288M).

Paid add-on (consider post-renewal)

AI Lead Scoring advanced features · Bombora intent (if not included). Not pre-renewal blockers.

Value math: PDM's historical meetings-per-email ratio is 273 / 285,957 = 0.095% (flat across all 49 sequences). Apollo AI writing features (AI Email Composer, AI Sequence Writing) are documented to lift reply rates in Week-2-vs-Week-1 A/B cohorts once Context Center is populated. The meeting uplift at PDM is modeled separately in Tab 1 (TAM Scoring) where the 24x lift comes from ranking the list, not from reply-rate gains alone. This tab's goal: establish Context Center as the enabling mechanism for every downstream AI feature.
Biggest credit-to-value multiplier: Context Center. One Week-1 working session populates voice, personas, value props, pain points, competitors. That single configuration compounds across seven AI features plus the Nooks scripting overlay. Single highest-ROI move in the 41-day window.
Strategy 03 · Nooks AI Scripting

Do not displace Nooks. Wrap it.

Nooks stays as the dialer surface. Apollo becomes the intelligence layer around it. Two AI research layers plug in at the seams: pre-call brief fires five minutes before Nooks dials; post-call webhook triggers AI Summary that auto-populates six SFDC custom fields. Reps keep the tool they dial with today. Salesforce remains the single pane of truth.

LAYER 01

Pre-call AI Research brief

Fires 5 min before Nooks connects the rep. Chrome Extension overlays the Nooks surface with a 60-second-read brief: company snapshot, recent news, tech stack, persona match, last-touch history from SFDC, Context Center-powered talking point.

Surface: Apollo Chrome Extension sidebar on Nooks dialer page

Writeback: SFDC Activity logged "AI Research executed" with brief URL attached

LAYER 02

Post-call AI Summary

Nooks "call ended" event webhooks the recording URL to Apollo. AI generates decision + action item summary. Auto-populates 6 SFDC custom fields on Opportunity + Contact: next_step, objection_captured, sentiment, buying_signal, timeline, deal_blocker.

Surface: Apollo Workflow Engine triggered by webhook

Writeback: SFDC custom fields + Task note on Contact record
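The Layer 02 flow reduces to a small event handler. A hedged sketch follows; the payload shape, `summarize_call` helper, and writeback structure are hypothetical stand-ins, since neither Nooks' webhook schema nor Apollo's Workflow Engine API is specified here:

```python
# Hypothetical sketch of the post-call flow: Nooks "call ended" webhook ->
# AI Summary -> six SFDC custom fields. Payload shape and helpers are
# illustrative stand-ins, not real Nooks/Apollo/SFDC APIs.
SFDC_FIELDS = ["next_step", "objection_captured", "sentiment",
               "buying_signal", "timeline", "deal_blocker"]

def summarize_call(recording_url: str) -> dict:
    """Stand-in for the AI Summary step; one value per SFDC custom field."""
    return {field: f"<summary of {recording_url} for {field}>"
            for field in SFDC_FIELDS}

def on_call_ended(payload: dict) -> dict:
    """Handle the call-ended event and assemble the SFDC writeback."""
    summary = summarize_call(payload["recording_url"])
    return {
        "opportunity_id": payload["opportunity_id"],
        "contact_id": payload["contact_id"],
        "fields": summary,                   # the six custom fields
        "task_note": "AI Summary attached",  # Task note on the Contact record
    }

writeback = on_call_ended({"recording_url": "https://example/rec/1",
                           "opportunity_id": "006xx", "contact_id": "003xx"})
print(sorted(writeback["fields"]))
```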

Integration stack

Layer | Component | Function | Implementation
01 | Chrome Extension | Overlays Nooks UI with Apollo context | Apollo Extension pinned on rep taskbar, active on all web surfaces
02 | SFDC bi-directional | Nooks logs call, Apollo reads + enriches + writes back | Single source of truth on Opportunity + Contact objects
03 | Webhook workflow | Nooks call-ended event → Apollo Workflow Engine → enrichment + sequence action | Webhook config in Nooks admin + Apollo Workflow trigger
04 | Context Center | Voice source for both layers | Persona + pain + value prop entries fed into AI generation

AI Scripting framework (pre-built, persona-matched)

Opener variants

Per-persona opener scripts populated from Context Center. Rep sees matching opener on screen as Nooks connects.

Objection response branches

Top 5 objections per persona mapped to response scripts. Brief surfaces relevant branch when objection fires.

Close qualification prompts

Context Center CTAs feed end-of-call qualification flow. Rep closes on which CTA matches persona.

Stage coverage · Nooks vs Apollo in the 10-stage flow

Stage | Nooks covers | Apollo covers | Integration mechanic
04 · Prioritized Lead | No. Nooks works a list. | Yes. AI Assistant ranks. | Apollo pushes ranked list into rep's morning plan. Rep dials from Nooks.
05 · First Connect (pre-call) | Connects the dial. | Briefs the rep. | Pre-call AI Research fires 5 min before Nooks connects. Chrome Extension overlay.
05 · First Connect (during call) | Carries audio + UI. | Surfaces talk track. | Chrome Extension shows persona-matched opener + objection branches over Nooks.
05 · First Connect (post-call) | Fires call-ended webhook. | Generates summary. | Nooks webhook → Apollo Workflow → AI Summary → 6 SFDC custom fields populated.
08 · Active Opp | Re-dials follow-ups. | Flags save play. | Apollo CI flags stalled objection. Workflow surfaces save-play sequence. Rep dials from Nooks.
What this proves: Nooks owns the dialer workload (576,619 all-time dials, empirical). Apollo owns the intelligence layer around it. The 10-stage flow catches the prospect at Stage 01-04 before Nooks even engages, then wraps Nooks at Stage 05, then re-engages at Stage 08. This is the "wrap not displace" architecture Allan presents at the May 8 exec call.
Time saved per rep
8 min/call
Note-taking elimination via auto-populated SFDC fields
Pilot cohort (5 reps)
133 hrs/wk
8 min × 200 calls/wk × 5 reps
21-seat annual scale
~29K hrs
8 min × 200 × 21 × 52 / 60 at current cadence
Connect-rate lift
+10–15%
FY27-01 library benchmark · INFERRED pending PDM measurement
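The time-saved figures above are straight arithmetic on the 8 min/call and 200 calls/wk assumptions:

```python
# Time-saved math behind the pilot and 21-seat scale figures.
minutes_per_call = 8
calls_per_rep_per_week = 200

# 5-rep pilot: 8 * 200 * 5 = 8,000 min/wk -> ~133 hrs/wk.
pilot_hours_per_week = minutes_per_call * calls_per_rep_per_week * 5 / 60

# 21 seats, 52 weeks: 8 * 200 * 21 * 52 = 1,747,200 min -> 29,120 hrs/yr.
annual_hours_21_seats = minutes_per_call * calls_per_rep_per_week * 21 * 52 / 60

print(round(pilot_hours_per_week), round(annual_hours_21_seats))
```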
Justin's frame shifts from "are you still on Nooks?" to "your Nooks reps are twice as effective because Apollo drives the scripting." Apollo becomes indispensable without asking PDM to switch dialers.
Strategy 04 · Waterfall Workflows

Scoring-gated enrichment across all 49 sequences.

Not every sequence gets full waterfall. Full waterfall + AI Research only fires on scored 70-plus cold outbound. Nurture gets basic enrichment. Sub-60 contacts consume zero enrichment credits until signal changes. This reshapes the 2.4M remaining main pool into a targeting engine, not a volume engine.

2.4M remaining main pool · allocation

High-intent scored
720K · 30%
Maintenance
1.2M · 50%
Workflow triggers reserve
480K · 20%

Sequence-level waterfall tuning

Sequence type | Stage | Enrichment tier | Credit consumption | Count in 49
Cold outbound · RevOps | Step 1-3 | Full waterfall + AI Research (scored 70+ only) | High | ~12
Cold outbound · Mktg Ops | Step 1-3 | Full waterfall + AI Research (scored 70+ only) | High | ~8
Nurture · repeat contact | Step 1-2 | Basic enrichment only | Low | ~15
Re-engagement · dormant | Step 1 | Waterfall refresh + job change check | Medium | ~6
Inbound follow-up | Step 1-2 | Form enrichment (already fired) + Bombora refresh | Low | ~5
Expansion · named account | Step 1 | Full waterfall + contact-level WVT refresh | Medium | ~3

Three primary workflow automations

WORKFLOW 01

Form abandon recovery

Form abandon trigger → Apollo Contact-Level WVT identifies the visitor → Enrichment + scoring → Auto-enroll in persona-matched re-engagement sequence → Slack notify owner.

Owner: Jessica · Credit: Medium

WORKFLOW 02

Job change expansion

Job change detected on a known contact → Waterfall enrichment on new company → AI Research brief generated → Sequence enrolled + Slack notify account owner.

Owner: Marco · Credit: Medium

WORKFLOW 03

Intent spike routing

Bombora intent topic spike → AI Lead Score runs → If 70+ route to correct rep based on persona map → Slack ping with account context.

Owner: Allan · Credit: Low (no enrichment at trigger; only scoring)

The R4 risk mitigation: Workflow 02 and 03 directly address the "stakeholder siloed to sales" risk. Job change detection surfaces new contacts at existing accounts automatically. Intent routing ensures marketing and ops personas get reached, not just sales titles.
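All three automations share one shape: trigger event → optional scoring/enrichment → routed action. A minimal dispatch sketch, with event names taken from the cards above and every handler body a hypothetical stand-in:

```python
# Minimal dispatch sketch for the three workflow automations.
# Event names mirror the cards above; step names are hypothetical stand-ins.
def form_abandon(event: dict) -> list[str]:
    # WVT identify -> enrich + score -> persona-matched sequence -> Slack
    return ["identify_visitor", "enrich_and_score",
            "enroll_reengagement_sequence", "slack_notify_owner"]

def job_change(event: dict) -> list[str]:
    # Waterfall on new company -> AI Research brief -> sequence + Slack
    return ["waterfall_new_company", "ai_research_brief",
            "enroll_sequence", "slack_notify_account_owner"]

def intent_spike(event: dict) -> list[str]:
    # Score first; only 70+ routes (low credit: no enrichment at trigger).
    if event.get("score", 0) >= 70:
        return ["route_to_rep_by_persona", "slack_ping_with_context"]
    return []  # below threshold: no action, no credits

HANDLERS = {"form_abandon": form_abandon,
            "job_change": job_change,
            "intent_spike": intent_spike}

def dispatch(event: dict) -> list[str]:
    """Route an incoming trigger event to its workflow handler."""
    return HANDLERS[event["type"]](event)

print(dispatch({"type": "intent_spike", "score": 84}))
```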
Execution 01 · Credit Deployment

Every remaining credit routed to a measurable lift.

2.39M main pool credits through Jun 2. Zero Mobile. Zero Export. Every credit spent in the next 41 days lands against a specific intervention with a quantified value output.

Pool state · Apr 22, 2026

Main pool
1.93M used of 4.32M · 44.7% consumed
Mobile credits
870,497 · 0 remaining
Export credits
140,152 · 0 remaining
AI words
313K used of 288M · 99.9% open

Credit-to-value routing (41 day window)

Credit bucket | Budget | Routed to | Expected lift
Scored 70+ outbound | 720K | Cold RevOps + Mktg Ops sequences · full waterfall + AI Research | 675 meetings vs 28 unranked baseline (24x)
Maintenance enrichment | 1,200K | Existing 49 sequences · basic tier across nurture + re-engagement | Preserves current 273-meeting cadence through renewal
Workflow triggers | 480K | Form abandon + Job change + Intent routing automations | Est. 15–25% reply-rate lift via triggered enrollment speed
AI words pool | 287.7M | Context Center + Research + Sequence Writing + Email Composer + Call Summaries | 15–25% reply rate on AI-written variants when Context Center fed
Mobile + Export | Zero remaining | Reload conversation post-renewal · not pre-renewal spend | Reframes "overage" to "specialty reload" for Justin

Three commercial options for Justin

OPTION A

Flat renewal + reload blocks

Keep ARR flat at $34,272. Add Mobile + Export reload blocks sized to next 12-month forecast. Finance-neutral on renewal line.

OPTION B

Higher base + overage block

Raise the base slightly to absorb specialty-pool volatility. Sell as "protecting your team from overage surprises."

OPTION C

Hold base + make-good credits

Flat ARR + make-good credits tied to FY27 commitment (Inbound Add-on + AI SKU attach).

Execution 02 · Intervention Inventory

Seventeen qualified FY27 plays plus three custom.

Library scan: 55 FY27 plays. Qualified for PDM: 17. Custom plays added for PDM-specific gaps: 3. Every intervention has a priority, owner, credit cost, and renewal impact.

ID | Play name | Priority | Owner | Timeline
FY27-01 | AI Parallel Dialer Scripting Power Play | P0 | Marco + Allan | Wk 3
FY27-02 | Inbound Website Visitor | P0 | Kerri Ann + Tim Marks | Wk 4
FY27-03 | Strategy and Buy In (exec alignment) | P0 | Adam + Kerri Ann + Allan | May 8
FY27-04 | CRM TAM Play | P1 | Allan | Wk 2
FY27-05 | Movers and Shakers | P1 | Jessica | Wk 3-4
FY27-06 | Deliverability Rotation + Reputation Monitoring | P1 | Kerri Ann | Wk 1
FY27-07 | Waterfall Activation | P0 | Allan | Wk 1
FY27-08 | Multi-Channel Orchestration | P1 | Marco | Wk 3
FY27-09 | AI Assistant Demo + Research Trial Conversion | P1 | Allan + Jessica | Wk 1-2
FY27-10 | AI Messaging Test Framework | P1 | Jessica | Wk 3
FY27-11 | Dialer Scripting | P1 | Marco | Wk 3
FY27-12 | Parallel Dialer Training | P1 | Marco | Wk 3
FY27-13 | Workflow Automation (3 triggers) | P1 | Jessica + Allan | Wk 3-4
FY27-14 | Router Setup (Inbound) | P1 | Kerri Ann + Tim Marks | Wk 4
FY27-15 | AI Research Trial Extension | P0 | Jessica | Wk 1
FY27-16 | Deliverability Config Audit | P1 | Kerri Ann | Wk 1
FY27-17 | AI Messaging rollout (post bake-off) | P2 | Jessica | Wk 4+

CUSTOM-A | Nooks AI Scripting Overlay (no library entry) | P0 | Marco + Allan | Wk 3-4
CUSTOM-B | Context Center Fusepoint configuration | P0 | Jessica + Allan | Wk 1
CUSTOM-C | Exec alignment presentation (May 8) | P0 | Adam + Allan + Kerri Ann | May 8
20 total interventions mapped. 8 P0 (land by T-0). 11 P1 (land by T+0 or T+14). 1 P2 (post-renewal or optional). 2 renewal-gate events (exec alignment + renewal close).
Execution 03 · Week-by-Week

Forty-one days. Every week owns a layer.

Apr 22 through Jun 2. Each week stacks on the prior week's exit gate. Week 4 lands the team-to-Justin readout on May 20 (T-13 to renewal close).

T-41 → T-35
Apr 22-28
Week 1 · Foundation
Owner: Jessica + Allan. Context Center populated (5 personas, 3 value props, 5 competitors, 5 pain points, 3 CTAs). Waterfall tuning audit. AI Research extends to 5 SDRs. Deliverability config audit. Exec alignment call logistics confirmed with Adam.
Exit gate: Context Center review with Marco at end of week.
T-35 → T-28
Apr 29-May 5
Week 2 · Scoring + qualify
Owner: Marco + Allan. AI Lead Scoring trained on SFDC closed-won. 30K pilot cohort scored. AI Personas defined (5 personas). Intent topics selected (Bombora if available). Clearbit bake-off readout with Diego scheduled for Week 3.
Exit gate: scoring model validated against last 6 months of closed-won.
T-25
May 8 (exec call)
Exec alignment call · Justin + Marco + Jessica
Owner: Adam + Kerri Ann + Allan. Present lifecycle deck + activated stack flow + credit options (A/B/C). Secure verbal renewal sign-off on flat ARR. Position Nooks AI scripting overlay (not displacement). Surface $1,428 upsell close path.
Exit gate: verbal renewal commit from Justin.
T-27 → T-21
May 6-12
Week 3 · Engagement
Owner: Marco + Jessica. Dialer trial evaluation framework with 5-rep cohort. Three workflow automations built (form abandon, job change, intent routing). AI Sequence Writing live on 3 existing sequences (200-send/variant test). Clearbit bake-off readout with Diego.
Exit gate: first AI variant results in Apollo Sequence Analytics.
T-21 → T-14
May 13-19
Week 4 · Intelligence + writeback
Owner: Diego + Allan. Contact-level WVT demo with Diego. Apollo CI pilot on 50 discovery calls. Nooks AI Scripting overlay live (Chrome Extension + webhook config). SFDC writeback validated across all workflows. AI Email Composer + Pre-Meeting Insights on calendar surfaces.
Exit gate: team-to-Justin readout package assembled by May 19.
T-13
May 20
Team-to-Justin readout · PDM presents own data
Owner: Marco + Jessica + Diego. PDM's own team presents 4-week pilot results: scored cohort meetings vs baseline, AI variant reply rate lift, Nooks scripting hours saved, workflow-triggered enrollments. Renewal conversation anchored in their data, not Apollo's pitch.
Exit gate: Justin's final renewal decision before close.
T-13 → T-0
May 20-Jun 2
Close window · commercial + renewal
Owner: Kerri Ann + Nicolas. Credit proposal sent to Justin with options A/B/C. Mailbox re-auth (Alex Kramer, Meghan Grondalksi). Allan gap analysis delivered. $1,428 upsell advances through Solution Evaluation. Fusepoint SFDC clarity.
Exit gate: verbal renewal sign-off from Justin before May 22.
T-0
Jun 2
Renewal close
Target outcome: flat renewal $34,272 + $1,428 upsell close-by-May-31 + reload block sized per Option A/B/C. FY27 commitments scoped in Option C. Post-renewal queue: Nooks consolidation decision (if dialer trial wins), CI pilot expansion, Fusepoint formalization.
Delivery 01 · Talking Points

Allan's presentation key messages.

Eleven talking points sequenced for the leadership readout. Each one has a one-line takeaway and a data anchor. Use as the script backbone for Apollo internal presentations and as prep for the May 8 exec alignment call.

TP 01 · The frame
Year two with Apollo. What is ready to compound.
Twenty-two months in, PDM runs one of the most active Apollo sequencing programs in segment. This is not a pitch. It is a plan.
TP 02 · The state
The foundation works. The adjacent layers are fragmented.
49 active sequences, 312,378 contacts L60D, 273 meetings set. Engagement is solved. Identification, scoring, research, CI, and writeback live across separate tools. Signals drop between them.
TP 03 · The credit reframe
From "credits per dial" to "meetings per credit."
Historical: 273 meetings / 285,957 emails = 0.095%. Unranked 30K cohort: ~28 meetings. Scoring-gated cohort (70+ only): 675 meetings. Same credit spend. 24 times more meetings.
TP 04 · The AI ceiling
287.7 million AI words sit unused.
0.1% of 288M AI word allotment consumed. The entire AI foundation lift is gated by Context Center populated in Week 1. One working session. No new budget.
TP 05 · Nooks wrapper not replacer
Apollo wraps the dialer they already have.
Pre-call AI Research brief + post-call AI Summary + auto-populated SFDC fields. 8 min/call note-taking saved × 200 calls × 5 reps = 133 hours per week in pilot. Scales to ~29K hours annually at 21-seat cadence.
TP 06 · Waterfall as targeting engine
Not every contact gets full enrichment.
2.4M remaining main pool allocated: 30% high-intent scored, 50% maintenance, 20% workflow reserve. Sub-60 contacts consume zero enrichment until signal changes. Pool math becomes a targeting engine, not a volume engine.
TP 07 · Multi-thread risk mitigation
Workflow 02 + 03 solve the siloed-to-sales gap.
Job change detection surfaces new contacts at existing accounts. Intent routing surfaces marketing + ops personas automatically. The R4 risk becomes a workflow config, not a relationship management problem.
TP 08 · Self-discovered value
PDM presents their own data on May 20.
4-week pilot. Marco + Jessica + Diego own the test. Week 4 readout lands T-13 to renewal. The renewal conversation is anchored in their data, not our pitch.
TP 09 · The commercial options
Three paths for Justin. All finance-neutral or finance-positive.
Option A: flat renewal + reload blocks. Option B: higher base + overage block. Option C: hold base + make-good credits tied to FY27. The conversation moves from overage surprise to specialty pool reload.
TP 10 · The ask (to PDM at exec call)
PDM picks two plays. We build alongside them. They tell us what compounds.
Six plays on the table at the May 8 exec call. PDM picks two that fit their team's bandwidth and Justin's renewal question. Apollo builds alongside the PDM team in the next four weeks. PDM presents the results to their own leadership on May 20 and decides what compounds into year three.
TP 11 · The activated stack flow (client deck Exhibit 10)
Kerri Ann names the stage. Allan names the intervention that activates it.
Ten stages, two realities per stage. When Kerri Ann walks Exhibit 10 on the client deck, she reads the stage names and the current-state loss. Allan follows with the Apollo capability that captures each one, mapped to the 4-week roadmap on slide 13. The client deck shows the flow. The internal deck shows the economics. Same 10 stages. Same prospect. Different frame.
Delivery 02 · Resources Library

Everything the team takes with them.

Five resource bundles ship alongside this plan. Each is self-contained, ready to hand to Marco, Jessica, Diego, and their broader team as the plan rolls forward.

BUNDLE 01

Context Center populated template

Paste-ready entries for all 5 Context Center sections: company overview, 3 value props in Power Digital voice, 5 competitor positioning paragraphs, 5 customer pain points, 3 CTA templates. Jessica runs the Week 1 session; this is what she populates from.

Source: content-engine-pdm-ai-package-2026-04-21.md Section 1

BUNDLE 02

AI Research daily ritual guide

5 daily SDR rituals + 3 weekly manager rituals. Time anchors (8:30 AM, 11 AM, 5 PM), Slack prompt templates, Chrome Extension pin instructions, calendar block recurring events. Marco distributes to the 5-rep cohort in Week 1.

Source: content-engine-pdm-ai-package-2026-04-21.md Section 2

BUNDLE 03

4-week PDM-led testing framework

Week 1 baseline capture, Week 2 AI Research layer, Week 3 AI Lead Scoring layer, Week 4 full stack. Data capture method (SFDC custom field tags + Apollo analytics + weekly 15-min standup). Success criteria: 20%+ reply rate lift, 50%+ time-per-lead reduction.

Source: content-engine-pdm-ai-package-2026-04-21.md Section 3

BUNDLE 04

AI Message Testing 4-variant framework

Control / AI default / AI persona / AI signal. 200-send statistical floor per variant. 7-day rolling window. Refinement loop: if all 4 lose to control across 3 runs, Context Center is the problem. Jessica owns, Marco reviews weekly.

Source: agency-copywriter-pdm-messaging-2026-04-21.md Section 1

BUNDLE 05

Exec alignment presentation deck

13-slide client-facing deck (DEL-PDM-004). McKinsey narrative architecture. Apollo Beams editorial design. 10-stage activated stack flow with dual-state (loss today vs. capture post-build). Ready for May 8 exec call with Justin + Marco + Jessica.

File: pdm-lifecycle-deck-client-2026-04-21.html

INTERNAL

Internal pressure-test deck (DEL-PDM-005)

13-slide internal deck with FY27 play IDs, risk flags, commercial framing, and Nooks consolidation internal notes. Use for Apollo account team pressure-test before the exec call.

File: pdm-lifecycle-deck-internal-2026-04-21.html


Known gaps requiring Allan confirmation before leadership presentation

  • Bombora availability in Engagement Only plan. Gates Section 1 intent model + Section 4 Workflow 3. If not included, scoring weights rebalance (fit 0.55, intent 0.20 via engagement proxy, engagement 0.25).
  • Contact-Level WVT beta tier access for PDM. Gates Section 4 Workflow 1 (form abandon + visitor identification).
  • Exact TAM filter count via Apollo search run. Current estimate is inferred; needs live Apollo search to validate the 2.5M number.
  • PDM bounce-rate baseline for quantified 45% lift math. Current lift is Apollo-reported average; PDM's specific baseline should be pulled from Email Analytics.
File registry: All artifacts in Clients/_active/power-digital-marketing/deliverables/. Supporting context in context/. 95-Memory dual-write entries DEL-PDM-001 through DEL-PDM-006.