In September 2024, Matthew Gallagher launched Medvi with $20,000 and no team.
By the end of 2025:
250,000 customers.
$401 million in revenue.
16.2% net margin.
His only employee was his brother Elliot.
His 2026 revenue projection: $1.8 billion. Not a valuation — Medvi has no outside funding and no official valuation. A revenue run rate extrapolated from early-2026 numbers. Worth saying clearly upfront.
When the New York Times ran the profile on April 2nd, it went viral immediately. Sam Altman emailed the journalist to say he'd apparently won a bet he'd made with tech CEO friends — he'd predicted a one-person, billion-dollar AI company would emerge. He wanted to meet the founder. The paper of record had handed him the anecdote he needed.
Within 24 hours, a parallel story emerged. An FDA warning letter. A class action lawsuit. Hundreds of fake AI-generated doctor accounts. Deepfaked patient testimonials. Questions the Times had either missed or buried past paragraph thirty.
This issue covers both stories in full. Not to glamourise what happened — the methods don't hold up. But to separate the model from the methods, because those are separable things. The deception didn't build the business. The model did. And the model is worth understanding.
There are also two other threads running alongside this that matter for how you build in 2026: Jack Dorsey's essay on the death of the org chart, and what the Palantir CEO said about who wins in an AI-driven world.
Let's get into it.
What He Actually Built
Gallagher's story starts with Watch Gang, a luxury watch subscription box he scaled to 300,000 members and, along the way, 60 employees. The team made everything slower and more expensive. He sold the business and took one lesson with him: headcount is friction disguised as growth.
In mid-2024, he spotted a gap. GLP-1 drugs, the class that includes Ozempic and Wegovy, had created a $50 billion demand wave. But the access model was still running on pre-digital infrastructure: GP referrals, six-week waits, insurance complexity, pharmacy delays. Demand: enormous. Friction: enormous. The gap between those two things was the business.
He didn't fix healthcare. He fixed one pipe.
The clinical infrastructure already existed. CareValidate handled doctor verification. OpenLoop Health handled compounding pharmacy fulfilment. Stripe handled payments. He orchestrated them with a digital layer built almost entirely by AI. Claude and ChatGPT wrote the backend code. Midjourney and Runway produced every creative asset. ElevenLabs generated voiceovers. AI agents handled the complete patient journey from intake form to prescription dispatch. No manual handoffs.
Gallagher's role: brand direction, pricing decisions, acquisition strategy. Everything else was system.
Start to launch: 60 days.
Month one: 300 customers.
Month two: 1,300.
Year one: $401 million.
Dorsey's Thesis and Why It Connects
Two weeks before the Medvi story broke, Jack Dorsey co-authored an essay with Sequoia partner Roelof Botha titled "From Hierarchy to Intelligence."
The argument is simple and structurally important. Corporate hierarchy has always existed to solve one problem: routing information through organisations too large for any single person to oversee. Managers aggregate context from below, pass instructions from above, and keep teams aligned. That function — information routing — is now something AI can perform continuously and at scale. Which means the reason hierarchy existed is being automated away.
Dorsey proposes two AI-driven "world models" to replace management layers. One aggregates internal data from decisions, code, and workflows to create a continuously updated picture of operations. The other maps customer behaviour through transaction data. Together, they feed an intelligence layer that composes responses dynamically — replacing the product roadmap with a system-generated backlog.
The human organisation collapses to three roles: individual contributors who build and operate, directly responsible individuals who own specific outcomes on 90-day cycles, and player-coaches who combine building with developing people. No permanent middle management layer.
Dorsey wrote this after cutting 40% of Block's workforce — approximately 4,000 people — in February 2026. Critics have noted the timing. The essay arrived as a narrative frame for mass layoffs that had already happened.
Current and former Block employees told The Guardian that roughly 95% of AI-generated code changes still require human modification, and that AI tools cannot yet lead in regulated environments. The essay describes a direction, not a finished system.
But here is why both Dorsey and Gallagher are pointing at the same thing from different directions.
Gallagher built Medvi as a Dorsey-model company from day one, without knowing the theory. No middle management. AI as the information routing layer. Three effective roles: him as DRI and player-coach, Elliot handling communications, and an AI stack as the operating layer. The world model is implicit in his orchestration logic — patient intake feeds doctor verification feeds pharmacy dispatch feeds Klaviyo sequence, all connected, all automated, all drawing from the same signal.
Dorsey's question — "What does your company understand that is genuinely hard to understand, and is that understanding getting deeper every day?" — is the diagnostic every CPG founder should be running on their own business right now. If the answer is nothing, AI is just a cost optimisation story. If the answer is deep — retention economics, customer psychology, offer architecture — AI amplifies it.
The Customer Acquisition Game Has Changed
This is where the Medvi story gets uncomfortable — and where the most useful lesson for your brand lives.
Gallagher grew to $401 million primarily through paid acquisition. But the playbook he used, and the playbook his affiliates used without his stated knowledge, is worth examining in full.

Investigators found over 800 Facebook accounts for allegedly fake doctors used to advertise Medvi's products. AI-generated patient testimonials. Deepfaked before-and-after weight loss images. Media logos implying editorial coverage that had never been received. A ticker on the homepage suggesting Bloomberg and The Times had featured Medvi when they had only taken its advertising money.
The acquisition machine was running at scale. AI had automated scripting, creative generation, and outreach across hundreds of ad accounts simultaneously. This is the AI-first acquisition game in its most aggressive form: not one ad creative tested at a time, but hundreds of angles deployed in parallel, AI generating variants faster than any compliance team can review them.
Gallagher's public response was that the fake doctor accounts were affiliates acting without authorisation, and he was "watching in real time as people learn about white label, drop shipping, and affiliate marketing." Whether that explanation holds up or not, it points at something real: when you build an AI-native acquisition engine and open it to affiliates, the system scales faster than your ability to police it. The moat breaks from the inside.
For CPG and wellness brands, this is a direct warning. The same infrastructure — AI-generated ad variants at scale, AI-scripted outreach, automated creative testing across hundreds of hooks simultaneously — is available to anyone right now. The brands running it well are pulling ahead in CAC efficiency. The brands running it badly, or running it through poorly controlled affiliate networks, are building a liability.
The question is not whether to use AI in acquisition. That debate is over. The question is where human judgment sits in the system, and whether your affiliates are extending your brand or contaminating it.
The Full Controversy — What the Times Missed, and How to Read It
Six weeks before the Times profile ran, on February 20, 2026, the FDA sent Medvi warning letter number 721455 for misbranding its compounded drugs. The letter warned that Medvi's marketing implied FDA approval of compounded products. The FDA stated: "Failure to adequately address any violations may result in legal action without further notice, including seizure and injunction."
The Times did not mention the letter.
A class action lawsuit, James v. Medvi LLC, was filed in federal court on March 20, 2026 — 13 days before the profile ran — alleging Medvi benefits from affiliate spam. The class allegedly numbers at least 100,000 people. The Times did not mention that either.
Gallagher's position on both echoes his public response: the FDA letter was addressed to an affiliate operating medvi.io without authorisation, not to Medvi's primary domain, medvi.org, and the fake doctor accounts were affiliates acting outside his control.
Whether those explanations hold legally is a question for the courts. What they reveal operationally is important regardless: the acquisition engine scaled faster than the governance layer. That gap — not the model itself — is where the liability lives.
Here is where the Techdirt takedown of the Times piece lands hardest. The NYT framed Gallagher's use of deepfaked before-and-after photos as "shortcuts he later fixed." Investigators found that after he supposedly fixed them, the same fake names appeared with entirely different fake faces — the con was updated, not removed. The paper that Medvi had fraudulently included in its media credibility ticker then wrote the profile that gave Medvi genuine credibility. The irony is not subtle.
And the $1.8 billion figure the Times led with? The paper itself buried the admission: Medvi has no outside funding and no official valuation. What it has is a revenue projection. Not nothing — $401 million in year one revenue is real and verified. But "a $1.8 billion company" is a framing choice, not a fact.
Here is how to read all of this.
The deception didn't create the opportunity. It exploited it faster and more recklessly than it needed to. A version of the same model — find the clogged pipe, orchestrate existing infrastructure, own the connector layer, use AI as labour — works completely without the fake doctors, the deepfaked testimonials, or the affiliate spam. It just works at a different scale and without the regulatory exposure.
You're not building toward $1.8 billion in revenue by any means necessary. You're building a £5M-£50M consumer brand with your name on the label, a relationship with your customer, and a reputation that compounds over time. The ceiling is different. The ethics are non-negotiable. The first principles underneath transfer completely.
That is the right way to read this story. Not as a blueprint. As a proof of concept for the model — with a clear object lesson in what happens when you strip out the governance layer entirely.
The Neurodivergent Advantage — and Why It's Suddenly Credible
In March 2026, Palantir CEO Alex Karp said something that cut through the usual noise about the future of work.
"There are basically two ways to know you have a future. One, you have some vocational training. Or two, you're neurodivergent." Karp — who lives with dyslexia — argued that the people who will thrive in an AI-driven economy are the ones who think differently, take risks, and see connections others don't: "more of an artist, look at things from a different direction, be able to build something unique."
One-fifth of Fortune 500 companies are expected to actively recruit neurodivergent talent by 2027, according to Gartner. Palantir already runs a dedicated neurodivergent fellowship.
The argument is structural, not sentimental. The ADHD brain scans broadly rather than narrowly. It holds multiple frames simultaneously. It generates connections between domains that conventional thinkers don't put together. These are exactly the cognitive properties the pre-AI economy punished, through scheduling friction, the executive-function tax, and the thousand small decisions between having an idea and doing something with it.
AI removes that tax. 88% of neurodivergent employees report being more productive with AI assistance. The tool that's threatening most people's careers is removing the primary friction that was holding this group back.
Gallagher's pattern maps to this without the diagnosis: obsessive multi-domain learning, resistance to conventional hierarchy, systems thinking across consumer psychology and code literacy simultaneously. That cognitive style — broad connection over deep specialisation — is what the new economy is selecting for. The question is whether you're using it.
What This Means for CPG and Wellness Brands in 2026
Let's be direct about the translation. You are not Matthew Gallagher. You are not trying to hit $1.8 billion in revenue projections by any means necessary in a regulatory grey zone with a two-person team and no long-term brand to protect.
You are building something more durable. A £10M-£100M consumer brand with a product that works, a customer who trusts you, and a subscription model that compounds over time. That is a different game with different constraints — and the Medvi principles map to it better than the Medvi methods ever could.
Here is what transfers cleanly.
Pipelines are the new moat.
Not products. Not ad creative. Not email flows in isolation. The complete integrated system from cold acquisition to retained subscriber — and how tightly those parts connect — is what creates compounding LTV. If your system has manual handoffs, unconnected data, or acquisition that doesn't feed retention, you have a pipeline problem dressed as a growth problem. Fix the pipe before filling it.
AI-first acquisition is the new baseline, not the edge.
The brands winning on Meta right now are running AI-generated creative at a scale that manual production cannot match. Dozens of angles tested simultaneously. Scripts generated and iterated in hours. This does not require a large team. It requires a clear brief, a systematic testing framework, and a human judgment layer that reviews what the machine produces before it runs. Gallagher used AI to generate creative at scale. The difference between him and you is not the tool — it's the governance layer you maintain and he didn't.
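The review layer described above can be sketched in a few lines. This is a toy illustration, not anyone's production system: `AdVariant`, `BANNED_CLAIMS`, and the approval field are all hypothetical names, and a real compliance screen would be far richer than substring matching. The point is the shape: machine-generated volume passes through an automated screen and then a named human before anything spends money.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: real claim rules come from your regulator and your lawyer.
BANNED_CLAIMS = ("fda approved", "guaranteed", "clinically proven")

@dataclass
class AdVariant:
    hook: str
    body: str
    approved_by: Optional[str] = None  # a named human, set at review time

def machine_flags(v: AdVariant) -> list:
    """First gate: automated screen for claim language a regulator would query."""
    text = f"{v.hook} {v.body}".lower()
    return [claim for claim in BANNED_CLAIMS if claim in text]

def deployable(v: AdVariant) -> bool:
    """A variant ships only if it clears the screen AND a human has signed off."""
    return not machine_flags(v) and v.approved_by is not None

# AI can generate twenty of these an hour; none runs without both gates.
risky = AdVariant("Lose weight fast", "Guaranteed results, FDA approved")
clean = AdVariant("Real routine, real results", "Cancel anytime", approved_by="KC")
print(machine_flags(risky))  # → ['fda approved', 'guaranteed']
print(deployable(clean))     # → True
```

The design choice worth copying is that approval is a named person, not a boolean: when the letter arrives, you know who reviewed what.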
MCPs and connected intelligence are the next infrastructure layer.
Model Context Protocol (MCP) servers, the connectors that let AI tools read live data sources such as Shopify, Klaviyo, and your subscription platform, mean that the diagnostic capability Gallagher built into his patient intake system is now available to any subscription brand. An AI that reads your Shopify cohort data, cross-references it with Klaviyo flow performance, and surfaces which subscribers are at churn risk before they cancel: that system is buildable today without a development team. That is your world model. Build it.
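To make the world-model idea concrete, here is a minimal churn-scoring sketch in Python. Everything in it is an assumption for illustration: the field names, weights, and thresholds are invented, not Medvi's logic, Shopify's API, or Klaviyo's; in practice the inputs would arrive via platform exports or MCP connectors, and the weights would be fitted to your own cohort data.

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    email: str
    days_since_last_order: int  # from your commerce platform
    opens_last_30d: int         # from your email platform
    open_tickets: int           # from your helpdesk

def churn_risk(s: Subscriber) -> float:
    """Toy risk score in [0, 1]: stale orders, silent inboxes, and
    unresolved complaints each add weight; the total is clamped."""
    score = 0.0
    if s.days_since_last_order > 45:  # past a 30-day cycle plus grace
        score += 0.5
    if s.opens_last_30d == 0:         # disengaged from email entirely
        score += 0.3
    if s.open_tickets > 0:            # an unresolved complaint
        score += 0.2
    return min(score, 1.0)

def at_risk(subs, threshold=0.5):
    """Surface subscribers worth a retention intervention, riskiest first."""
    flagged = [(s.email, churn_risk(s)) for s in subs if churn_risk(s) >= threshold]
    return sorted(flagged, key=lambda t: -t[1])

subs = [
    Subscriber("a@example.com", days_since_last_order=60, opens_last_30d=0, open_tickets=1),
    Subscriber("b@example.com", days_since_last_order=12, opens_last_30d=5, open_tickets=0),
]
print(at_risk(subs))  # → [('a@example.com', 1.0)]
```

The value is not the scoring rule, which any analyst could improve; it is that three data sources that normally never talk to each other feed one decision before the cancellation happens.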
The human edge is narrower than you think — and more valuable.
If AI handles information routing, the humans in your business should be doing exactly four things: strategic judgment on offers and positioning, relationship-building with customers and partners, creative direction on what gets made, and governance on what goes live. Everything else is system territory. Gallagher understood this. The brands that don't are competing with significant structural overhead against the ones that do.
A £100M consumer brand built cleanly beats a $1B revenue projection built recklessly.
This is the real frame. The Medvi model at its cleanest — orchestrate infrastructure, own the connector layer, use AI as labour, maintain a human judgment layer — is a blueprint for building a capital-efficient, high-margin subscription brand without the regulatory exposure, without the reputational risk, and without the affiliate liability. The ceiling is lower. The floor is considerably more stable.
The Rule of One™ Lens
| Layer | What Gallagher Did | What a Clean Version Looks Like |
|---|---|---|
| Infrastructure | Orchestrated what existed | Same — rent, don't build |
| Acquisition | AI at scale, governance absent | AI at scale, human review layer in place |
| Pipeline | Owned the connector between demand and fulfilment | Same — this is the moat |
| Governance | Thin — created regulatory and legal exposure | Non-negotiable — brand is the long game |
| Human edge | Brand, pricing, strategic judgment | Same — protect this time ruthlessly |
| Moat | Speed, no depth | Retention data + brand trust — compounds |
The model was sound. The execution was ethically compromised. Those are separable things — and only one of them is worth copying.
The Techdirt piece, the FDA letter, the class action — none of that breaks the underlying logic of finding a clogged pipe and building the orchestration layer to clear it. It shows what happens when you strip out governance entirely and optimise purely for speed. For CPG founders building something with their name on it, that trade-off was never on the table.
Three Takeaways
1. Know exactly what your affiliates are saying — today, not next month.
Gallagher's story shows that the acquisition system scales faster than oversight does. Regulatory exposure doesn't distinguish between you and your affiliates when the letter arrives. If you have any form of affiliate, influencer, or referral programme, audit what is being published in your name this week. The speed of AI-generated content means the gap between brief and brand contamination is narrower than it has ever been.
2. Run an AI acquisition audit before your next campaign.
Review the last five ad creatives your brand produced. How many variants did you test? How long did each take? What would it look like to run 20 variants simultaneously — generated in a day, with a human reviewing for compliance and quality before deployment? If that process doesn't exist yet, you are competing at a structural disadvantage. Build it before your next spend cycle.
3. Map your pipeline before you fill it.
Before your next paid push, map the complete journey from cold click to month-three subscriber. Every connection. Every manual step. Every data source that doesn't talk to the next one. A leaking pipeline doesn't benefit from higher pressure — it just leaks faster. Fix the pipe first. The brands that do this before scaling acquisition are the ones that build compounding LTV instead of compounding churn.
If this gave you a sharper read on where the industry is heading, forward it to a founder who needs it.
If you want to know whether your subscription infrastructure is built to compound or just to convert, start here.
Kunle Campbell | Conscious Commerce Co. Helping challenger CPG and wellness brands build profitable subscription-first infrastructure using the Rule of One™. consciouscommerceco.com