ADHD Advisor · Growth Analytics · April 2026

Where our margin actually comes from.

An end-to-end view of who our best customers are, where they live, how we find them — and where the biggest untouched margin in the business is sitting.

8,240 paying customers · Jan–Oct 2025 sign-ups · avg 10.5mo tenure · $2.44M revenue · $0.83M clinical cost · $1.61M margin · 66% gross margin
How this walkthrough flows
  1. The headline
  2. The two signals that matter
  3. But they're not independent
  4. Three segments — who to target, who to skip
  5. What else is hiding in the data
  6. The payment ladder & why the 2-payment bucket leaves
  7. Not all providers are equal — a look inside Dx+MM
  8. Does this hold for newer customers?
  9. What we're going to do about it
  10. Important caveats
Before we start — how we're measuring value
Every dollar in this doc is margin-adjusted LTV (mLTV) — what the customer paid us minus the cost to serve them. Cost model: deposit-only customers cost $0 clinically. Everyone else: $112 for the first attended appointment plus $38 for each subsequent one. Dollar figures only use Jan–Oct 2025 sign-ups (n = 8,240, avg 10.5 months tenure) so we have a fair revenue window. We use the newer cohort (10k customers since Nov 2025) to pressure-test whether the patterns hold. We excluded 834 "zero-payment" customers (700 sign-up noise + 134 likely refunds) because they distort averages.
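The cost model is simple enough to state in code. A minimal sketch of how mLTV is computed under the model above (function names are ours for illustration, not from any internal codebase):

```python
def clinical_cost(attended_appointments: int) -> int:
    """Cost to serve under the doc's model: deposit-only customers
    (zero attended appointments) cost $0; everyone else costs $112
    for the first attended appointment plus $38 for each subsequent one."""
    if attended_appointments == 0:
        return 0
    return 112 + 38 * (attended_appointments - 1)

def mltv(revenue: float, attended_appointments: int) -> float:
    """Margin-adjusted LTV: what the customer paid minus the cost to serve."""
    return revenue - clinical_cost(attended_appointments)

# Deposit-only customer: $38 paid, no appointments -> pure margin.
print(mltv(38, 0))   # 38
# Typical 2-payment customer: ~$157 revenue, one attended appointment.
print(mltv(157, 1))  # 45
```

The $45 here lands within rounding of the $46 average quoted later for the 2-payment bucket, since the bucket averages are computed over real per-customer revenue rather than the rounded $157.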
How to read the confidence percentages
Every finding in this doc comes with a confidence % — think of it as "how sure are we this isn't random noise?" It's computed as (1 − p-value) × 100 from the appropriate statistical test. (Strictly, a p-value is the chance of seeing an effect this large if there were none, so (1 − p) isn't literally the probability the effect is real — but it's a serviceable shorthand for ranking findings.)
>99.9%
Odds this is noise: < 1 in 1,000. Treat as fact.
95–99.9%
Statistically significant. Act on it.
80–95%
Directional. Test it, don't bet on it yet.
< 80%
Too close to random. Don't rely on it.
One important caveat: high confidence only means the effect is real, not that the effect is large or actionable. For example, the insurance signal is >99.9% confident but only moves mLTV by $42 per customer. And the quiz-goal signal is >99.9% confident in aggregate but dissolves entirely when you control for ongoing-intent (a confound). Always read confidence alongside effect size and the surrounding context.
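The badge assignment is mechanical once a test has produced a p-value. A sketch of the banding logic (the function name is illustrative):

```python
def confidence_badge(p_value: float) -> str:
    """Map a p-value to the doc's confidence bands via (1 - p) * 100."""
    conf = (1 - p_value) * 100
    if conf > 99.9:
        return ">99.9% — treat as fact"
    if conf >= 95:
        return "95–99.9% — act on it"
    if conf >= 80:
        return "80–95% — directional, test it"
    return "< 80% — don't rely on it"

print(confidence_badge(0.0004))  # lands in the >99.9% band
print(confidence_badge(0.09))    # 91% -> directional band
```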
01

The headline — our best customer is 6.7× our worst, and margin concentration is extreme

Our best customer type is worth 6.7× our worst, and the top 25% of customers produce 83% of all our margin. This concentration is the whole game — the rest of this doc is about how to find more of them.

Top 25% → share of margin: 83% — 2,060 customers produce $1.34M of our $1.61M mLTV
Average margin / customer: $196 — $297 revenue minus ~$100 clinical cost, ~10 months in
Our best customer type: $518 — Florida customers who said yes to ongoing treatment
Best vs worst gap: +$441 per customer — in revenue terms the gap looked like +$584; the margin view is narrower because FL customers cost more to serve, but the ratio is actually larger (6.7× vs 5.0×)

Two signals do almost all the work. One thing flipped when we switched from revenue to margin.

1

Two signals — "yes to ongoing treatment" + state — explain most of the variance

Whether the customer says yes to ongoing treatment on the quiz is the single biggest predictor of value. Which state they live in is the second. Everything else is noise or a refinement. A Florida customer who said yes is worth $518. A North Carolina customer who said yes is worth $46. Same question, same quiz, same product — wildly different economics.

2

"Skipped the question" beats "said no" on margin — which is the opposite of revenue

In revenue: yes $336 > no $180 > skipped $124. In margin: yes $224 > skipped $98 > no $84. Skippers mostly pay the deposit and leave — we keep 100% of it. "No" customers attend one appointment that costs us $112, so we eat most of the revenue. The question isn't "are they paying more?" It's "are they paying more than they cost?"

3

Insurance customers are a bigger drag than revenue showed

In revenue, plans-to-use-insurance looked $31 below average. In margin they're $67 below average — it's our worst-paying bucket, because they attend more appointments for less revenue. The quiz question that captures this got dropped in October. Adding it back is cheap and gives us a usable filter at the top of the funnel.

02

The two signals that matter — and the four that don't

Two attributes — whether they said yes to ongoing treatment and which state they live in — explain nearly all the variance. The other four we tested are noise, confounded, or too small to act on.

01
Whether they say yes to "ongoing treatment" on the quiz Biggest single signal. Yes customers average $224 mLTV, no customers $84, skipped $98. Sorts differently in margin than revenue — skipped now beats no.
+21 pt gap · >99.9% conf. · +$140 yes vs no in mLTV
02
Which state they live in Same 7 over-perform (FL, TX, CA, WA, NJ, NY, PA). Same 20 under-perform. FL leads at 2.35× the average; NC at 0.23×.
+2.35× · >99.9% conf. · +$215 top 7 vs bottom 20 in mLTV
03
What they say about insurance Moved up from #4 in the revenue view. Plans-to-use-insurance is our lowest mLTV bucket. Question was dropped from quiz in Oct — worth re-adding.
−13 pt gap · >99.9% conf. · +$42 out-of-pocket vs plans-to-use
04
What goal they pick on the quiz Looks like a signal, isn't. See Section 03 — once you filter for ongoing-intent, goal flattens entirely (long-goal+yes = $334, short-goal+yes = $326). Don't use for targeting.
+9 pt gap · >99.9% conf. (confounded) · +$65 but dissolves when controlled
05
ADHD symptom score + time of day they signed up Symptom score: bins 0–17 all land between $190–$207. Only max-score (18) drops meaningfully to $182 — that one finding is real, but it's a $20 effect on one bucket. Time of day shows 1–2 pt blips (Sunday, 7pm) with no confidence. Neither is actionable as a targeting input.
~$20 gap · too small to matter · don't build targeting around these
Why we're ignoring items 4–5. Goal looks meaningful in aggregate but dissolves when you control for ongoing-intent (it's a proxy for the filter, not an independent signal). Symptoms and time-of-day both show real patterns but the dollar impact is ~$20 per customer — below the noise floor of normal campaign variance. Mention symptoms in creative if you want, but don't filter on them.
03

They're not independent — there's a filter, an amplifier, and a trap

State alone doesn't create value — it amplifies the ongoing-treatment filter. A Florida customer who skipped the question is worth $164. A North Carolina customer who said yes is worth $46. Neither signal works without the other.

The hierarchy
Filter
"Yes to ongoing treatment" The one question that does most of the sorting. Nothing bypasses it. Even a Florida customer who didn't say yes is worth only $178 mLTV — below the $196 cohort average. Without the filter, even Florida drops to mediocre.
Amplifier
State State multiplies yes-answers but fails to rescue no-answers. FL + yes = $518; FL + no = $178; Bot20 + yes = $113. State is a tier-multiplier applied on top of the filter — not a standalone signal.
Trap
Quiz goal Looks like a signal in aggregate. Isn't. Once you control for ongoing-intent, long-goal + yes = $334 and short-goal + yes = $326 — within $8 of each other. Do not use quiz goal as a targeting input.

The evidence. Three tests, one for each level of the hierarchy.

Test #1 — The amplifier
Does state alone make a customer profitable?
✗ No — Florida can't rescue a no-to-ongoing customer.
FL+yes vs FL+no · >99.9% confidence
FL + yes to ongoing (n=482): $518
FL + no to ongoing (n=66): $178
FL + skipped (n=30): $164
A Florida customer who skipped the question is worth less than a Texas customer who said yes ($320). Never bid on state alone.
Test #2 — The filter's ceiling
Does the filter work everywhere?
~ Partially — filter helps in bottom-20 states, but there's a ceiling.
Bot20+yes vs Bot20+no · >99.9% confidence
Bot20 + yes (n=2,761): $113
Bot20 + no (n=564): $60
NC + yes to ongoing (n=325): $46
Yes-to-ongoing nearly doubles Bot20 customers ($60→$113) but the ceiling is far below average. NC+yes is $46. The stim-state caveat bites hard in margin.
Test #3 — The trap
Is quiz goal an independent signal?
✗ No — goal dissolves once you filter for ongoing.
Long+yes vs Short+yes · 91% confidence (below significance)
"Dx + ongoing care" — all (n=848): $317
Short "ADHD diagnosis" — all (n=1,372): $252
Long-goal + yes (n=756): $334
Short-goal + yes (n=922): $326
Aggregate says long beats short. Filtered for yes, they're within $8. The aggregate pattern exists because short-goal pickers say yes less often. Goal is not a targeting input.

Each state's margin over- or under-performance

Showing only states where we have enough volume to read a signal. Multiplier above 1× = higher margin per customer than average (bar goes right). Below 1× = lower. Baseline = $196 mLTV per customer. The badge on the right shows how statistically confident we are the state differs from average — >99.9% means the odds of this being random noise are under 1 in 1,000. The top 7 are states where we can prescribe stimulants; almost all the bottom states are non-stim. That's not a coincidence — it's probably the mechanism.

[Bar chart — mLTV multiplier by state: ← under-performs · baseline (1×) · over-performs →]
04

Three segments — who to chase, who to skip

Three practical segments to run against. Peak ($518 mLTV) is our premium target. Scale ($361) is where most of our budget should live. Suppress ($77) needs to come out of main prospecting.

Average margin per customer — what each segment is actually worth

Navy baseline bar is the cohort average ($196). Everything above it is net-positive; everything below barely clears cost. Red-outlined bars are segments we're essentially breaking even on.

[Bar chart — mLTV per customer by segment, $0–$600 scale]
Peak — Florida + wants ongoing
Customer lives in Florida AND said yes to ongoing treatment on the quiz.
mLTV per customer: $518 avg (2.6× the cohort average) · 482 customers (5.8%) · $250K mLTV (15.5% of total) · 66.6% in top 25% · $728 avg revenue / cust
Scale — top 7 states + wants ongoing
Lives in FL, TX, CA, WA, NJ, NY, or PA AND said yes to ongoing.
mLTV per customer: $361 avg (1.8× the cohort average) · 2,463 customers (29.9%) · $890K mLTV (55.1% of total) · 48.9% in top 25% · $511 avg revenue / cust
Suppress — low-margin cluster
Bottom 20 states AND skipped/said no to ongoing. Or: short "ADHD diagnosis" goal AND didn't say yes.
mLTV per customer: $77 avg (40% of the cohort average) · 1,043 customers (12.7%) · $80K mLTV (5.0% of total) · 7.2% in top 25% · $151 avg revenue / cust
05

What else is hiding in the data

Five things that emerged from going deeper — all genuinely non-obvious, all supported by the numbers above, all with direct implications for how we spend, measure, and scale. Each of these would have been easy to miss on a first read.

01
The first paid appointment is barely profitable — the economics live entirely at payment 3 and beyond.
6.7%
margin on the 1→2 payment step. Every step after that is 80%+ margin.

When a customer moves from "deposit only" ($38 mLTV) to "attended one appointment" ($46 mLTV), we add +$119 in revenue but +$111 in clinical cost. Net margin on that step: $8. Every subsequent step (2→3, 3→4, 4→5, 5→6+) is between 82% and 89% margin.

1→2: 6.7% · 2→3: 83.7% · 3→4: 81.6% · 4→5: 84.7% · 5→6+: 89.4%
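These step margins are incremental: extra revenue minus extra clinical cost, over extra revenue. A sketch — the 1→2 dollars are from this section; the later-step revenue figure is a hypothetical that reproduces the 83.7% shape, since each later step adds only a ~$38 appointment cost against a full subscription payment:

```python
def step_margin(delta_revenue: float, delta_cost: float) -> float:
    """Incremental margin on a payment-ladder step:
    (extra revenue - extra cost) / extra revenue."""
    return (delta_revenue - delta_cost) / delta_revenue

# The 1->2 step from the doc: +$119 revenue, +$111 clinical cost.
print(f"{step_margin(119, 111):.1%}")  # 6.7%

# A later step: hypothetical +$233 revenue against a $38 appointment cost.
print(f"{step_margin(233, 38):.1%}")  # 83.7%
```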
02
We don't actually have a retention problem — we have one specific conversion gate.
89%
per-step continuation rate after customers cross the 2→3 subscription gate. Only one step in the whole business drops below 80%.

If you compute the continuation rate at each step of the payment ladder, the pattern is binary. Before payment 3, most customers leave. After payment 3, almost nobody does.

1→2: 69% · 2→3: 40% · 3→4: 81% · 4→5: 86% · 5→6+: 89%
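The first two continuation rates fall straight out of the bucket counts quoted elsewhere in this doc (8,240 total customers, 3,433 stopping at exactly 2 payments, 2,259 reaching 3+). A sketch:

```python
# Customers reaching at least n payments, from the doc's cohort counts.
reached = {1: 8240, 2: 3433 + 2259, 3: 2259}

def continuation(n: int) -> float:
    """Share of customers at payment n who go on to payment n+1."""
    return reached[n + 1] / reached[n]

print(f"1->2: {continuation(1):.0%}")  # 69%
print(f"2->3: {continuation(2):.0%}")  # 40%
```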
03
Maxed-out ADHD symptoms is a negative signal — our ICP is moderate, not severe.
−4.2pt
mLTV signal gap for patients who score 18/18 on the ADHD scale. And getting stronger: older cohort was −2.0, newer is −4.2.

The symptom-score-to-mLTV relationship is not linear. It's an inverted U. The middle of the scale is our best customer; both extremes underperform, and maxed-out is actively the worst.

Score · Effect (pt) · Confidence
0–6 · +0.2 · 77%
7–10 · +0.3 · 45%
11–13 · +0.1 · 85%
14–15 · +2.0 · 93%
16–17 · −0.5 · 47%
18 (max) · −4.2 · >99.9%

Read this carefully: only the max-score bucket clears the statistical-significance bar. The shape of the inverted U is directionally right (the 14–15 positive bump is at 93%, close), but the one finding we can say with certainty is that 18/18 scorers underperform. The bifurcation observation below (#04) reinforces this — the newer cohort makes this finding stronger, not weaker.

04
The funnel is bifurcating — the best customers are getting better and the worst are getting worse.
+0.7 → −3.9
the "short ADHD diagnosis" goal flipped from neutral to actively negative in the newer cohort.

Look at how every non-state signal shifted between cohorts. Ongoing-intent sharpened (+21 → +37). "Dx + ongoing care" sharpened (+9 → +15). Short diagnosis flipped (+1 → −4). Max symptoms sharpened negative (−2 → −4). Every signal is getting more predictive in both directions.

This isn't noise. It means our top-of-funnel is sorting customers better AND we're pulling in a higher share of low-intent customers who just want a letter. The middle is shrinking; the ends are growing.

Confidence: Each individual signal effect is >99.9% significant in both cohorts (see Section 08 table). The bifurcation pattern — that effects are strengthening in both directions simultaneously — is structural, not statistical; it's what you'd expect to see when your top-of-funnel starts pulling a wider persona range.

05
Payment and attendance are locked together — there's no autopay zombie keeping us paid.
0.5%
of Dx+MM customers paid 2+ subscription months without attending. The "ghost subscriber who forgot to cancel" isn't a meaningful cohort.

Across 3,376 Dx+MM customers, 92% show zero mismatch between payments and appointments. 7.5% have one extra payment (a month of normal subscription lag). Only 16 customers (0.5%) have 2+ wasted payments, and only one customer in the entire dataset has 3+. Payments track attendance almost one-to-one.

0: 92.0% · 1: 7.5% · 2: 0.4% · 3+: 0.03%

Wasted payments per Dx+MM customer. "Wasted" = payments minus deposit minus attended appointments.
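The footnote's definition translates directly into a per-customer metric. A sketch (field names illustrative):

```python
def wasted_payments(total_payments: int, attended_appointments: int) -> int:
    """'Wasted' = payments minus the deposit minus attended appointments.
    Zero means payments and attendance track one-to-one."""
    return max(0, (total_payments - 1) - attended_appointments)

# Normal subscriber: deposit + 4 subscription payments, 4 visits attended.
print(wasted_payments(5, 4))  # 0
# One month of subscription lag: paid a month not yet attended.
print(wasted_payments(5, 3))  # 1
```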

06

The hidden opportunity — the payment ladder

42% of our customers are stuck at 2 payments. Moving any one of them to a 3rd payment drives a 3.2× margin lift per customer. This is the single highest-leverage retention opportunity in the business, and it lives entirely with the lifecycle team.

Margin by payment count — the staircase

Each row shows avg revenue (navy), clinical cost (red), and the resulting margin (far right). The step from 2 → 3 payments is where margin actually starts accumulating. Anything that converts a 2-payment customer to a 3-payment one drives outsized leverage.

Deposit-only is fine. The 2-payment bucket is the real problem.

A

Deposit-only customers (30.9% of cohort) generate pure margin at $38 per head

They pay $38, never trigger a clinical cost, and leave. 100% margin. $97K total, 6% of mLTV. Don't try to "rescue" them into appointments they don't want. The economics here are fine as-is.

B

The 2-payment bucket is the biggest unclaimed margin pool in the business

3,433 customers — 41.7% of the cohort — made exactly 2 payments. Avg revenue $157, avg cost $111, avg mLTV $46. They're nearly half our customers and they generate 9.9% of our margin. Tip one of them to a 3rd payment and their value goes from $46 to $149 — a 3.2× lift. That's the single highest-leverage retention opportunity we have. This is where the lifecycle team should be investing, full stop.

C

Subscribers (3+ payments) are 27% of customers but 84% of margin

2,259 customers generating $1.35M in mLTV. The 6+ payment bucket alone — 1,395 people, 16.9% of the cohort — generates 71.8% of all our margin at $829 per customer. This is the prize we're optimizing for.

Why the 2-payment bucket leaves — first-party cancellation data

We have direct cancellation-survey data (Typeform, Q1 2026) that tells us why these customers stop at 2 payments. It differs dramatically by state type. In states where we can't prescribe stimulants, people leave over the product limitation. In states where we can prescribe stimulants, they leave because the provider interaction fails.

Limited states (bottom 20 — can't prescribe stimulants)
"I didn't get the medication I expected"
Cohort A · n=38 cancellations · Jan–Mar 2026
Couldn't get expected medication (stimulant surprise) — 24%
Monthly cost was too high — 21%
Decided to get care elsewhere — 16%
Provider communication / responsiveness — 13%
Insurance / prior auth issues — 5%
26% of cancellers expected stimulant medication at signup. 45% said monthly billing wasn't clearly explained. This is mostly a pre-funnel disclosure problem — they sign up, then discover the limitation, then leave.
Full-coverage states (top 7 — CAN prescribe stimulants)
"My provider wasn't there for me"
Cohort B · n=43 cancellations · Jan–Mar 2026
Provider communication / responsiveness — 26%
Monthly cost was too high — 19%
Insurance / prior auth issues — 9%
Got care elsewhere — 9%
Couldn't get expected medication — 7%
Clinician ratings average 6.2/10 in full-coverage states (vs 7.2 in limited). When the structural excuse is removed, provider quality is what's failing. This is a service-delivery problem the growth team can't fix — but it's the constraint on the 2→3 retention play.
83%
of no-shows want to reschedule — they're not leaving, they're slipping. Meanwhile, 43% of people whose provider no-showed them refuse to reschedule (vs only 2% of self-caused no-shows). This population is high-intent and largely recoverable. Nobody is systematically recovering them. This is a massive retention leak hiding in plain sight.

What to actually do about it — the retention playbook

Four tactical interventions, ranked by leverage. All of these are cheap to test and directly tied to the findings above.

Intervention #1 · Lifecycle
Sub-5-minute provider no-show recovery

When a provider no-shows, trigger recovery outreach within 5 minutes — immediate rebook with a different clinician, no forms, no friction. Currently 43% refuse to reschedule after a provider no-show; industry service-recovery data suggests instant resolution can bring that below 25%. The gap between "immediate" and "next-day" is worth millions in mLTV.

Intervention #2 · Lifecycle
2→3 payment sequence for the stuck bucket

3,433 customers are at exactly 2 payments. Build a dedicated post-first-appointment sequence specific to them — lead with "your ADHD Success Plan" framing, reinforce the provider relationship by name, make the 3rd booking one tap. Tipping any single customer lifts mLTV 3.2×.

Intervention #3 · Product
Pre-quiz stimulant-state disclosure

In limited states, 26% of cancellers expected stimulants at signup. Disclose the state-level prescribing restriction before the quiz (not after payment). Yes, this will reduce top-of-funnel volume. But every bad-fit signup that makes it to payment is a refund or chargeback waiting to happen, and they're mostly in the Suppress segment anyway.

Intervention #4 · Retention
Billing clarity fix

45% of limited-state cancellers said monthly billing wasn't clear. The $49 → $150 → $130/mo cost stacking surprises people. A clearer subscription explainer at booking — pre-authorization preview, calendar with billing dates, plain-language summary — removes the "predatory" / "bait & switch" language showing up in surveys and on Reddit.

07

Not all providers are equal — a look inside Dx+MM

Zoom into Diagnosis + Medication Management — the package where the subscription-gate question actually lives — and a 1.62× spread opens up between our best and worst providers on identical product. The pattern that matters isn't who attends. It's who comes back.

Scope note — why this section is narrower
The full-dashboard analysis above uses all package types. This section isolates the Dx+MM package only (n = 3,376; $511K revenue; $287K mLTV). This is the package where customers can progress to subscription — making it the one that matters most for ongoing margin.

It's also a very young cohort: 98% of sign-ups are Feb–Apr 2026 and the median patient is 1.05 months old. Raw avg GP per provider is therefore dominated by whose patients signed up earlier, not who retains better. To compare fairly, we use a maturity-adjusted index — each provider's actual avg GP divided by the cohort-wide expected GP at that provider's specific tenure mix. An index of 1.20 means "20% above the expected GP for a patient of this age"; 0.80 means "20% below."

Cross-validated against the % of each provider's 1.5mo+ tenured patients who crossed the 2→3 payment gate (r = 0.75 between the two measures). Providers with n < 30 excluded.
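The maturity adjustment described above can be sketched as follows. `expected_gp_at_tenure` stands in for the cohort-wide tenure curve (here a made-up linear placeholder; in practice it would be estimated from the full Dx+MM cohort), and all numbers are illustrative:

```python
from statistics import mean

def expected_gp_at_tenure(months: float) -> float:
    """Illustrative cohort-wide GP curve by patient tenure.
    Placeholder shape — the real curve is estimated from the cohort."""
    return 30 + 55 * months

def maturity_adjusted_index(patients: list[dict]) -> float:
    """Provider's actual avg GP divided by the expected GP
    for that provider's specific tenure mix."""
    actual = mean(p["gp"] for p in patients)
    expected = mean(expected_gp_at_tenure(p["tenure_months"]) for p in patients)
    return actual / expected

# A provider whose very young panel out-earns the tenure curve by ~25%:
panel = [{"gp": 100, "tenure_months": 1.0}, {"gp": 99, "tenure_months": 0.8}]
print(round(maturity_adjusted_index(panel), 2))  # 1.25
```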
Performance spread: 1.62× — top provider (Dantwan Smith, index 1.25) vs bottom (Tiney Ray, 0.77) on the same product. Bootstrap 95% CIs don't overlap.
Providers in sample: 40 — all providers with 30+ Dx+MM patients; they account for 98% of the package's volume.
Bottom 5 share of volume: 7.3% — 245 Dx+MM customers routed to providers running 0.77–0.88× expected. Reroutable without spending a dollar on acquisition.
"Attend but leak" providers: 5 — above-average first-appointment attendance, bottom-quartile subscription conversion. The damage happens inside the appointment.

Each provider's maturity-adjusted mLTV index

Showing all Dx+MM providers with 30+ patients, sorted best to worst. Above 1× = customers worth more than expected for their tenure (bar goes right). Below 1× = worth less. Baseline = the Dx+MM cohort average. Spread is 1.25× to 0.77× — a 62% performance gap on identical product, after controlling for patient maturity. Badge on the right shows confidence that the provider's result isn't random noise at this sample size — 95%+ is the conventional significance threshold.

[Bar chart — maturity-adjusted mLTV index by provider: ← under-performs · baseline (1×) · over-performs →]
Stickiest — "Show up and come back"
High attendance + high subscription conversion
These providers get patients in the door AND keep them coming back. Caroline Tomlinson's 68% gate-crossing rate is 2.1× the cohort average.
Caroline Tomlinson — 65% attend · 68% convert
Paula Copeland — 67% attend · 58% convert
Chris Blaisdell — 89% attend · 60% convert
Dorly Nerval — 47% attend · 51% convert
Vincent Covelli — 63% attend · 50% convert
Cohort gate-crossing rate: 33%. These providers nearly double it.
"Attend but leak" — the real problem signature
High attendance + abnormally low subscription conversion
Patients show up for the first visit. They just never come back. This is the leak pattern that maps to Section 06's Cohort B cancellation data — "provider communication/responsiveness" cited by 26% of full-coverage state cancellers.
Michelle Pourtabib — 67% attend · 5% convert
Ann Gilchrist — 56% attend · 9% convert
Jennifer Davis — 59% attend · 11% convert
Gina Chamungwana — 85% attend · 13% convert
Amber Patterson, FNP-BC — 61% attend · 17% convert
The first appointment is happening. The relationship inside the appointment is what's failing. Worth sitting in on session recordings.

Three interventions, ranked by how cheaply they unlock margin.

A

Reroute the bottom 5 — Tiney Ray, Amber Patterson, Irene Olabode, Ann Gilchrist, Anthony Kane

They run at 0.77–0.88× expected mLTV and handle 7.3% of Dx+MM volume. Shifting their new-patient queue to the top 10 providers costs nothing and captures 5–8 percentage points of blended index lift for the package. This is the cheapest margin in the deck.

B

Audit the "attend but leak" five specifically — Michelle Pourtabib, Ann Gilchrist, Amber Patterson, Gina Chamungwana, Jennifer Davis

The leak is post-attendance, so the fix is clinical quality, not scheduling. Sit in on recordings of their first appointments. Compare to recordings from Caroline Tomlinson or Paula Copeland. What Caroline is doing that Michelle isn't — that's the retention playbook. Rolling that knowledge out is the 2→3 gate play the lifecycle team can't touch directly.

C

Don't fire Kevin Williams

He looked like a top-3 provider on raw-GP analysis because most of his volume is in other packages (Initial Consultation, where he's legitimately the best). In Dx+MM specifically he's solidly above median (1.07 index) but not exceptional. The Dx+MM product is new for everyone. Expect rankings to shift as mature patients accumulate through Q3.

Two caveats before pulling this trigger. First, 149 Dx+MM patients are assigned to Mitchell Kohl and 181 to Andres Jimenez — if these volumes reflect default auto-assignment rather than patient choice, the fix is routing logic, not provider performance management. Worth checking. Second, we don't know from this data whether the bottom providers are FNPs, LMHPs, or MDs — provider specialty may explain part of the spread. Before acting on individual names, verify both.

08

Does this hold for newer customers? Yes.

Every directional finding above is based on Jan–Oct 2025 sign-ups. We re-ran the full analysis on 10,000 newer customers. Every finding holds — and ongoing-intent is actually stronger in the newer cohort.

Finding · Older cohort · Newer cohort · Confidence (older) · Verdict
Said yes to ongoing treatment · +21.3 · +36.7 · >99.9% · Much stronger
Top 7 states (FL, TX, CA, WA, NJ, NY, PA) · +33.0 · +18.7 · >99.9% · Holds, compressed
Bottom 20 states · −31.1 · −13.8 · >99.9% · Holds, compressed
Florida specifically · +12.3 · +9.2 · >99.9% · Holds
Picked "Diagnosis & ongoing care" · +8.9 · +15.3 · >99.9% · Stronger
Maxed out ADHD symptom scale (18) · −2.0 · −4.2 · >99.9% · Stronger (neg)

Net: every finding in the table is statistically significant at above 99.9% confidence in the older cohort — these aren't close calls. The "compressed" verdict on top-7 and bottom-20 states just means the effect magnitudes shrank as the Dx+MM product rolled out (the mix of packages customers sign up for has shifted); the direction and significance are unchanged. Peak segment (Florida + yes) is already 1.3× the average after only 2.4 months — the gap vs Scale and Suppress will widen over the next 6–9 months as these customers accrue subscription payments.

09

What we're actually going to do about it

Five owners, concrete actions, real numbers. Everything below is grounded in the analysis we just walked through — and most of it is self-actionable inside the growth team.

Before the actions — mLTV by segment
What each segment is actually worth to us
Segment · mLTV
Peak — FL + yes to ongoing · $518
Scale — top 7 + yes to ongoing · $361
Blended cohort average · $196
Suppress — Bottom 20 + no/skipped · $77

These are margin-adjusted dollars at ~10.5mo of tenure — what each customer actually nets us after serving them. Peak customers are worth 6.7× Suppress customers, which is the gap the paid and SEO plays below are trying to exploit.

Andrew — Paid Social
Stand up a Florida-only prospecting campaign
  • Florida-only prospecting campaign with ongoing-medication-management messaging. FL customers who say yes to ongoing are worth $518 mLTV — 2.6× the cohort average — and justify their own dedicated ad set instead of riding national campaigns.
  • Creative angle: lead with identity relief ("finally understanding why you've been struggling"), not speed or price. Everything in Section 05 argues this is what Peak responds to.
  • Exclude the Bottom 20 states from main prospecting at the ad-set level (location targeting, not audience upload — we can't use customer lists in healthcare). If budget forces Bottom-20 to keep running, isolate it in its own campaign so it doesn't contaminate the signal in the main prospecting learner.
Gabriel — Paid Search
Recalibrate state-level bids from margin, not revenue
  • State bid adjustments from mLTV: FL +27%, NJ +11%, NY 0%, WA −7%, PA −19%, TX −19%, CA −29%. California is the biggest change — revenue looked good, but CA customers generate more appointments so margin lags.
  • Lower bids in the 20 low-performing states by 50–70%. North Carolina specifically should be down 77%.
  • Add negatives: just need adhd diagnosis, adhd test only, one-time adhd assessment. These attract the Suppress persona.
Ashley & Grant — SEO / Content
Aim every new page at the Peak/Scale persona, not volume
  • Stim-state landing pages. Build dedicated pages for each of the 21 stim states (at minimum FL, TX, CA, NY) optimized for adhd medication management [state], online adhd medication [state]. These are high-intent queries and the 7 stim states in our top cohort are exactly where high-mLTV customers are searching.
  • Own the "is ADHD Advisor legit" query. The #1 conversion blocker per customer voice is Reddit skepticism. A transparent comparison page + review rollup directly addresses the trust gate. One Trustpilot reviewer said they almost didn't convert because of r/adhdwomen — this is not hypothetical.
  • De-prioritize volume queries that attract Suppress traffic. "Cheap ADHD diagnosis," "ADHD test only," "one-time ADHD evaluation" all pull in the wrong persona.
  • Identity-relief content. "Late-diagnosed ADHD in women," "ADHD in your 30s/40s" — these queries attract Peak psychographically. Our #1 positive theme in reviews is "weight lifted" from late diagnosis.
Sandhya & Justine — Retention / Lifecycle
Own the 2→3 tipping point and the no-show recovery loop
  • Ship the sub-5-minute provider-no-show recovery automation (Intervention #1 in Section 06). Currently 43% of provider-no-show customers refuse to reschedule; bringing that below 25% is probably worth six figures in mLTV.
  • Build the dedicated 2-payment sequence. 3,433 customers are sitting there. Lead with the provider-by-name relationship ("Dr. [X] put together your next steps"), reinforce the treatment plan, make the 3rd booking one tap.
  • Launch the 2→3 conversion as the team's primary north-star metric for Q2. Every % lift is worth $1.5K+ in mLTV.
  • Leave deposit-only customers alone — they're 100% margin as-is and don't want to be rescued.
Product / Growth Infrastructure
Two high-leverage infrastructure fixes
  • Re-add the insurance question to the quiz. Plans-to-use-insurance is our lowest mLTV bucket ($129 vs $196 average). Question was dropped in October — bringing it back is cheap and gives us a usable filter at the top of the funnel.
  • Surface stimulant-state limitations pre-quiz for visitors in limited states. Will reduce top-of-funnel volume. Will also reduce Suppress-segment bad-fit enrollment, refund rate, and Reddit-sentiment damage. The margin math says we're losing money on most of those signups anyway.
10

Important caveats before acting on this

Five things to flag before acting on any of this. The stimulant-prescribing caveat is the most important — it's probably the structural fact underneath the entire state pattern.

  1. The state finding is almost certainly about stimulant prescribing, not customer quality. All 7 top-performing states let us prescribe stimulants. Most of the bottom 20 don't. So what looks like "FL/TX customers are better" mostly reflects "we can only fulfill the product these customers actually want." The NC+yes bucket at $46 mLTV is the clearest evidence: the filter doesn't rescue customers we can't serve. This doesn't change the targeting conclusion, but it changes how we think about the root cause.
  2. The cost model is simple on purpose. We use $112 for the first appointment and $38 for each subsequent one. No infrastructure overhead, no provider-pay variation, no no-show handling. Good enough for ad-bid ceilings. For strategic pricing decisions, pressure-test against actuals.
  3. We excluded 834 "zero-payment" customers who distort averages: 700 never paid and never showed (sign-up noise), plus 134 who got appointments but have no payment on record (likely refunds/chargebacks — cost is real, revenue is zero). Neither is a targetable segment.
  4. We don't know which ads these customers came from. No UTM data in the file. Andrew and Gabriel will need to cross-reference against Meta and Google to know which campaigns are driving which segment.
  5. Older cohorts have lower mLTV than newer cohorts at matched tenure — pricing and product have improved. These figures are probably a conservative estimate of what a brand-new customer will be worth once they've accrued 10+ months.