April 05, 2026
The Big Think Interview · email · 37 mins
Johns Hopkins neuroscientist David Linden — who was diagnosed in ~2021 with synovial sarcoma (a tumor the size of a Coca-Cola can lodged in his heart wall, prognosis 6–18 months) — explains the biology behind three supposedly mystical mind-body effects, and how understanding them is reshaping treatment of cancer, cardiac disease, and pain.
The body sends signals to the brain through three distinct channels: fast electrical signals via the spinal cord, slow hormonal signals via the bloodstream, and a subtler mechanism in which the brain detects the rhythm of breathing and the slight arterial dilation of each heartbeat. The brain responds in kind through the somatic motor system (voluntary movement), the autonomic nervous system (sympathetic “fight-or-flight” vs. parasympathetic “rest-and-digest”), and broadcast hormones released through the pituitary and adrenal glands. Because so many processes relevant to disease — sleep, appetite, tumor growth, autoimmune activity — are under brain influence, behavioral interventions can reach them through these same channels.
The gut is a far more sophisticated sensing organ than textbooks convey. The stomach can distinguish water from food and assess macronutrient content (fat, protein, carbohydrates) in real time. Twenty minutes later, sensors in the small intestine evaluate nutrient content again. Artificial sweeteners fool oral sugar receptors but cannot fool gut receptors, which are molecularly different — this conflicting signal is a likely reason artificial sweeteners are largely ineffective for weight loss.
GLP-1 (glucagon-like peptide-1) is secreted by cells lining the small intestine when food is detected; it slows gastric motility and suppresses appetite for minutes to a couple of hours. Natural GLP-1 degrades within minutes, but Novo Nordisk chemists attached fatty acids to the molecule, causing it to bind albumin in the blood and become resistant to enzymatic degradation and kidney excretion. The result — semaglutide (Wegovy/Ozempic) and tirzepatide (Zepbound/Mounjaro) — produces 12–17% body weight loss over many months of once-weekly injections.
GLP-1 receptor drugs appear to do more than suppress appetite: the GLP-1 receptor is present in the heart, kidneys, and liver, and the drugs show anti-inflammatory effects that outpace what weight loss alone would predict. Early evidence also suggests they curb alcohol consumption, reduce use of psychoactive drugs, and address compulsive behaviors — pointing to a broader role in modulating reward circuitry. The critical limitation is impermanence: weight returns in full when the drugs are stopped.
Intensive exercise suppresses appetite through a distinct molecular pathway: it generates lactate, which conjugates with the amino acid phenylalanine, and the resulting compound triggers a biochemical cascade that reduces hunger. This pathway is separate from exercise’s benefits for mood and age-related cognitive decline. The average American is 27 pounds heavier than in 1960 — a shift driven not by genetics but by food corporations systematically exploiting ancient “pack-on-fat” circuits through engineered, highly processed foods.
“Voodoo death” — named and documented by Harvard physiologist Walter Cannon in 1942 — is real and biological. Cannon identified sympathetic nervous system hyperarousal as the mechanism, but modern understanding adds a second punch: the parasympathetic system remains paradoxically elevated for hours to days, and the hypothalamic-pituitary-adrenal axis floods the body with cortisol. Together these produce a physiological cascade that can kill. The critical prerequisite is belief: the phenomenon only works in people whose worldview makes a hex or curse credible. If the witch doctor reverses the curse before death, the person recovers.
Voodoo death has a direct Western-medicine equivalent. A documented 1970s case: a man was diagnosed with liver cancer and told little could be done; he asked only to live through Christmas with his family. He died just after New Year’s. Autopsy showed he did not have liver cancer and had nothing seriously wrong with him. He was killed by belief — by the same autonomic cascade that kills curse victims — delivered by physicians in white coats rather than shamans with bones and feathers.
Broken heart syndrome (Takotsubo cardiomyopathy) provides a mechanistic bridge between grief and cardiac death. First identified by Japanese doctors in elderly bereaved people, the condition is named after a Japanese octopus trap (“tako” = octopus, “tsubo” = pot) because the heart deforms to resemble that shape. It is caused by sympathetic nervous system over-activation, impairs cardiac function, and is fatal in some cases. Epidemiologically, people are measurably more likely to die of cardiovascular incidents, cancer, and autoimmune disease in the period following the death of a long-term partner, close friend, sibling, or sometimes a pet.
The placebo effect for pain is mediated by endorphins and enkephalins — the brain’s endogenous opioid molecules. This is proven pharmacologically: the drug naloxone, which blocks opiate receptors, also blocks placebo pain relief. Placebo effects extend beyond pain to post-surgical healing rates, immune system modification, and blood pressure reduction; the blood pressure effect can be classically conditioned (pairing a drug with a lemon-lime flavor solution, then producing partial reduction with the flavor alone — a direct Pavlovian mechanism).
The placebo effect for pain has been growing stronger for decades — but only in the United States and New Zealand, a geographic pattern that would otherwise be hard to explain. The leading hypothesis: these are the only two countries that permit direct-to-consumer advertising of prescription drugs, which amplifies belief in drug efficacy and thereby amplifies the expectation-driven endorphin response. The practical consequence: pharmaceutical trials increasingly struggle to show drugs perform significantly better than placebo.
Open-label placebos work — telling a patient “I am giving you a sugar pill” still produces measurable placebo benefit. This is counterintuitive but has been replicated consistently enough that entire branches of medicine now deploy open-label placebos therapeutically. The implication is that conscious knowledge of the deception does not fully override the expectation-based neurochemical mechanism.
Tumors actively recruit nerve fibers by secreting neurotrophins — nerve growth factor (NGF) and brain-derived neurotrophic factor (BDNF) — which cause nerve cells to grow toward and penetrate the tumor mass. This is not passive; it is a co-option strategy. When tumors are innervated, prognosis worsens. Preventing tumor innervation is now a distinct therapeutic target in oncology.
The nerve-cancer relationship is bidirectional and immunosuppressive. Pain-conveying sensory neurons inside tumors secrete calcitonin gene-related peptide (CGRP), which suppresses CD8+ T lymphocytes — the immune cells that patrol tumor edges and kill peripheral tumor cells. More innervation → more CGRP → more immune suppression → faster tumor growth. Tumors also send chemical signals that attract blood vessels and produce signals that prevent immune cells from recognizing the tumor as foreign (“no problem here”).
Stress hormones promote cancer growth and spread. Some cancers express beta-adrenergic receptors — receptors for noradrenaline/norepinephrine. Beta-blocker drugs (already used for heart rate control and anxiety) slow progression in those cancer types. This provides a pharmacological mechanism for the epidemiological observation that psychosocial support — psychotherapy, strong social connections — measurably slows cancer progression. Psychotherapy reduces stress hormone signaling; the beta-adrenergic pathway is one route through which that reduction reaches tumor biology.
A 2025 Australian randomized controlled trial of colon cancer patients found that the exercise group had roughly 30% lower mortality over 8 years than a control group given only a pamphlet. Linden’s assessment: “If this were a drug effect, everybody would want this drug.” Prior correlational studies had been dismissed as health-selection bias, but the randomized design eliminates that confound. The effect size is large enough to be clinically actionable.
Linden describes two specific biological pathways through which his wife’s love may be suppressing his cancer. First, neural: activation of the ventral tegmental area (the brain’s dopamine reward/expectation circuit) propagates electrical signals through the hypothalamus to the tumor’s nerve supply, altering the tumor microenvironment. Second, immune: the same VTA activation modifies the brain’s cytokine output, boosting circulating T-cells and natural killer cells that attack cells at the tumor’s surface. Asya Rolls’s lab in Israel demonstrated a proof of concept: artificially activating VTA reward circuitry in mice improved cardiac contractility recovery after heart attack via an immune signaling cascade through the liver.
The brain’s core computational mode — predicting the near future — may be why humans cannot truly engage with their own mortality. In Linden’s framing, the default mode network constantly models what happens next (seconds, minutes, hours ahead). This computation presupposes a future in which you exist. The concept of personal non-existence has no computational slot in this architecture. He speculates this is why afterlife narratives appear in almost every world religion: not as a coherent theological claim, but as a cognitive artifact — a “bug” produced by near-future prediction circuitry that generates the intuition that consciousness persists after death.
Linden, now several years past an 18-month maximum prognosis, found that his cancer diagnosis restructured his sense of value in ways that revealed the context-dependence of all valuation. Five more years of life felt like “an impossible gift” when measured against 6–18 months; the same five years felt like being “cosmically cheated” when measured against a normal lifespan. He also discovered that profound anger and deep gratitude are not mutually exclusive mental states — they coexisted, simultaneously, in a way he had not previously understood was possible. His working framework: treat the biology as comprehensible, find agency in understanding it, and live as close to the ground of experience as possible.
Quotable:
“When I say that the deep and unconditional love that I feel my wife is helping to keep my cancer at bay, I am saying that as a biomedical researcher. And I am saying that with the idea that this isn’t occurring in the realm of ether and spirituality — it is occurring in the realm of biology.” — Linden on his own cancer and the science behind social connection
The Substack Post · email · 13 mins
Chaotic Good Projects, a digital marketing agency, manufactures hundreds of fake fan accounts for musicians — clients include Geese, Mk.gee, Laufey, Wet Leg, and Jane Remover. That artists and labels believe manufactured social proof is now a prerequisite for algorithmic traction signals a race to the bottom: authenticity is losing to optimization.
College radio is surging as a direct reaction to that algorithmic exhaustion. A 2025 survey of 80+ DJs found enrollment and listener interest up sharply, driven by algorithm fatigue, analog nostalgia, and a search for “third spaces.” The New York Times profiled KXLU on the phenomenon, calling college radio’s appeal its “unpredictability, uniqueness and random brilliance” — human curation that no recommendation engine replicates.
OpenAI acquired TBPN — “the talk show that every VC, founder, and CTO in Silicon Valley watches” — while promising editorial independence. The move reflects an industry-wide scramble for genuine storytelling voices: even the most powerful AI company needs humans to produce compelling conversation.
A bicoastal heritage grain revival is reshaping professional baking. In California, the Tehachapi Heritage Grain Project grew from a 2-acre experiment ~12 years ago to nearly 400 acres, led by Alex Weiser of Weiser Family Farms and biologist Sherry Mandell. Heritage varieties like Red Fife and Sonora wheat are drought-tolerant, sequester carbon through deep root systems, and appear on menus at Providence and Petitgrain — even blending in just 15–30% heritage flour produces measurable flavor change.
On the East Coast, Brooklyn Granary & Mill (opened 2025, Huntington Street off the Gowanus Canal) fills the gap left by GrowNYC Grainstand’s 2021 closure. Run by Patrick Shaw-Kitch, former head baker at Blue Hill at Stone Barns, it stone-mills spelt, buckwheat, rye, einkorn, and multiple wheat strains for Gramercy Tavern, Borgo, and bakeries like Elbow Bread. The core technical insight: whole grain flour peaks in flavor immediately after milling, before natural oils oxidize — industrial white flour, stripped of bran and germ, has lost most of what made the grain interesting.
The end point of cooking is not eating — it is cleaning. Marian Bull’s essay “How to Do the Dishes” argues that recipes systematically omit cleaning instructions because including them would deflate the fantasy, yet dishwashing is the final, essential leg of every meal. A disciplined post-meal cleanup (she recommends an 8-minute timer as a psychological forcing function) is less chore than identity: finely honed taste extends to how you close the loop on what you made.
A personal essay, “Withholding” by August Lamm, runs underneath all of this as a counterpoint: she falls for Robin, a professional writer she meets at a London Catholic magazine launch, who refuses to share his last name or town, citing protection of his professional identity. Her father died a month before; his, a decade prior. The withheld name becomes a stand-in for everything that remains just out of reach — authentic connection in a world where even fan accounts are fake.
Quotable:
“The end point of cooking is not eating; it is cleaning.” — Marian Bull, “How to Do the Dishes”
‘Lenny’s Newsletter’ via PubsforSubs · email · 16 mins
Startup equity at a bootstrapped 27-person company should be treated as “funny money” until liquid. Jon Roemer’s baseline: demand the equity incentive plan, total shares outstanding, and strike price — anything less and you can’t run the math. Ashwin adds that refusing to share these documents is itself a red flag, since withholding them is “very unusual.”
Joshua Herzig-Marx’s rule of thumb for valuing options: multiply your grant by (current share value minus strike price), then check whether each year’s vest equals at least 50% of base cash comp. Seed-stage executives typically receive ~2% equity; Series A ~1%. Withautograph.com provides comparables. Peter’s harder warning: preferred shareholders get paid before common stock in any liquidation, so expected value is low unless revenue is growing 2–3x annually.
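Herzig-Marx’s rule of thumb reduces to two lines of arithmetic. A minimal sketch — the share counts, prices, vesting period, and salary below are illustrative assumptions, not figures from the newsletter:

```python
# Sketch of the rule of thumb: grant value = shares * (current value - strike),
# then check each year's vest against 50% of base cash comp.
# All numbers here are hypothetical for illustration.

def option_grant_value(shares: int, current_share_value: float, strike_price: float) -> float:
    """Paper value of an option grant (zero if underwater)."""
    return shares * max(current_share_value - strike_price, 0.0)

def passes_vest_check(grant_value: float, vesting_years: int, base_salary: float) -> bool:
    """Each year's vest should equal at least 50% of base cash comp."""
    annual_vest = grant_value / vesting_years
    return annual_vest >= 0.5 * base_salary

# Example: 40,000 options, $12 current value, $2 strike, 4-year vest, $180K base.
value = option_grant_value(40_000, 12.0, 2.0)   # 40,000 * $10 = $400,000
print(value)                                     # 400000.0
print(passes_vest_check(value, 4, 180_000))      # $100K/yr vs $90K threshold -> True
```

Note this computes paper value only — per the liquidation-preference warning above, common stock can be worth far less than this in an actual exit.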
Pre-seed VCs who claim to invest before product routinely demand 10,000 MAUs before writing a check. Thais Blumenthal (ex-Waze, pedestrian navigation startup) had 60% retention and $500K in angel funding after 3 months, and still hit this wall. Jason McCoy’s framework: “traction” means paying customers and accelerating acquisition rate — not user growth or engagement metrics — and strong founders should expect 100+ rejections before a yes.
Raising pre-seed takes 6–9 months; the strategy that consistently works is building angel conviction first, then returning to VCs with more data. Kevin Porter cites Ben Horowitz’s 2007 A16z post: VCs are scanning for any one of 50 reasons to say no, so each meeting is a filter, not a conversation. Some founders who raised $2–4M at this stage burned it all without finding a working model — early capital isn’t always the win it appears.
MCPs inflate AI context windows just by being loaded, consuming tokens before any work is done — unlike CLIs, which are auth’d once on the machine and available to all agents at no token cost. Aaron Nichols tested a workflow that would have required 100+ MCP tools and replaced it with a single CLI plus a skill, cutting token use by an order of magnitude. Marcel Fiala’s parallel finding: replacing an MCP bundle of 30 tools with a single execute_script tool using custom JSON-AST produced the same order-of-magnitude savings.
MCPs also create a security exposure that CLIs avoid: API keys passed through agent workflows are logged in observability systems and visible to the inference provider. CLI auth stays local. The MCP OAuth server-side flow is the safer option, but most MCP implementations don’t use it. A secondary complaint is reliability — Sani notes MCPs “disconnect all the time,” making them unsuitable for production workflows where determinism matters.
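The token-cost asymmetry described above can be made concrete with rough arithmetic. A hedged sketch — the ~4 characters/token ratio and the schema sizes are crude assumptions for illustration, not measurements from Nichols’s or Fiala’s workflows:

```python
# Rough illustration of why loading many MCP tool schemas consumes
# context tokens before any work happens. The chars-per-token ratio
# and schema sizes below are illustrative assumptions.

def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token."""
    return max(1, len(text) // 4)

# Suppose each MCP tool ships a name + description + JSON schema
# totaling ~1,200 characters, all loaded into context up front.
tool_schema = "x" * 1200

mcp_bundle_cost = 30 * estimate_tokens(tool_schema)  # 30 tools loaded pre-use
single_tool_cost = estimate_tokens(tool_schema)      # one execute_script-style tool

print(mcp_bundle_cost, single_tool_cost)  # 9000 vs 300: ~30x, i.e. an order of magnitude+
```

The exact numbers don’t matter; the shape does — MCP costs scale linearly with the number of tools loaded, paid before any call is made, while a single consolidated tool (or a CLI auth’d once on the machine) pays that overhead once.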
Monzo’s U.S. exit reflects a structural mismatch: U.S. consumers hold an average of 5.3 financial accounts versus 2.8 in the UK (up from 2.5 in 2014), meaning Americans optimize each financial tool independently rather than consolidating around a primary bank. Miroslav Pavelek estimates only ~10% of Monzo’s U.S. users treated it as a primary account; the rest used it for a single feature. Neobanks occupy an awkward middle ground — better UX than big banks, worse credit card rewards, weaker savings rates than Ally-type products — and lose to per-tool optimization.
Building a “knowledge graph” fails in small orgs because it becomes manual overhead; reframing it as “living product/org memory for onboarding and AI agents” improves adoption. Sandhya Simhan’s working pattern: define a small entity set upfront (product areas, features, projects, people, teams), let AI extract relationships, have humans validate only critical nodes, and treat activity streams as signal rather than source of truth. Loren Rogers’ implementation uses an Obsidian vault committed as a Git repo with Claude Code writing to it as part of normal workflows — documentation becomes a byproduct of work, not a separate task.
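Simhan’s pattern — small fixed entity set, AI-extracted relationships, human validation only on critical nodes — can be sketched as a few lines of data plumbing. Everything here (entity names, relation labels, the triple format) is an illustrative assumption, not her actual schema:

```python
# Minimal sketch of the org-memory pattern described above: define a
# small entity set, let extraction propose (source, relation, target)
# triples, and route only triples touching critical nodes to humans.
# All names and data shapes are hypothetical.

ENTITY_TYPES = {"product_area", "feature", "project", "person", "team"}

# Triples an AI agent might extract from activity streams
# (treated as signal to validate, not as source of truth).
extracted = [
    ("billing", "owned_by", "payments-team"),
    ("invoice-export", "part_of", "billing"),
]

def needs_human_review(triple: tuple[str, str, str], critical_nodes: set[str]) -> bool:
    """Only triples touching a critical node require human sign-off."""
    src, _, dst = triple
    return src in critical_nodes or dst in critical_nodes

critical = {"billing"}  # humans validate only these nodes
review_queue = [t for t in extracted if needs_human_review(t, critical)]
print(len(review_queue))  # both sample triples touch "billing" -> 2
```

The Git-backed Obsidian variant swaps the storage layer — triples become wiki-links in markdown files that an agent commits as it works — but the validation gate is the same idea.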
Quotable:
“Token consumption timing: MCPs consume tokens by being loaded (pre-use), unlike APIs/CLIs. CLI is auth’d once on machine, used by all agents… It’s impressive how agents chain commands and none of that is accessible via MCP. Not only does it waste agent turns, but it reduces determinism when an agent has to manipulate the data vs. a tool.” — Aaron Nichols, on why CLI beats MCP for production agent workflows
Karri Saarinen / Thesis · rss · 4 mins
AI’s unpredictability is an interface failure, not a model failure. When an agent sends a customer email you meant to review first, the model did exactly what it was instructed — the interface simply never gave you a chance to say stop. Saarinen, drawing on his design work at Airbnb, Coinbase, and Linear, places responsibility squarely on designers: the “slippery feeling” of non-deterministic software is an interface problem, which means it’s solvable.
Chat windows are structurally inadequate for serious repeated work. Two people asking for the same thing in slightly different ways can get drastically different results because the blank-cursor interface imposes no structure and puts the entire burden of quality on the person typing. For exploration that’s acceptable; for team workflows it isn’t — interfaces need to guide agents and humans toward better outcomes without becoming so rigid they can’t flex.
Linear’s Agent Interaction Guidelines establish that agents must unambiguously signal their identity at all times — clear “Agent” badges in activity feeds, never mistakable for a human even on a quick scan — and must operate through the same native platform patterns humans use rather than as a bolt-on layer. A third principle (paywalled) holds that an agent ignoring a stop command erodes trust faster than one that simply makes mistakes, which reveals that controllability, not accuracy, is the core trust primitive in human-agent design.
Quotable:
“Non-deterministic software breaks the contract. When outcomes can vary, sometimes wildly, based on what someone types into the same chat window, designing for reliability becomes genuinely harder — and it almost always traces back to the interface rather than the language model.” — on why AI’s slipperiness is a design problem
Nityesh Agarwal · rss · 2 mins
Quotable:
“That road is paved with previous iterations of Claudie we had to fire because they were not structured right.” — on the non-linear process of getting an AI agent to be a reliable co-worker
⚠️ Note: This article is paywalled. Only the introduction is publicly available. The core lessons — why Claudie was fired multiple times, the architectural workaround for her biggest performance problem, and why her first hard-coded task is reading her own employee handbook — are behind Every’s subscription wall.
Kyle Harrison from Investing 101 · email · 10 mins
MEDVi is not a healthcare company — it’s a marketing layer. Matthew Gallagher launched it in September 2024 for ~$20K using ChatGPT, Claude, Grok, and Midjourney, but all actual healthcare is outsourced to CareValidate (doctor/pharmacy connections) and OpenLoop Health (clinical infrastructure), two companies with ~700 combined employees. MEDVi generated $400M in revenue in year one across 250K customers, claiming 16.2% net margins, and is on pace for $1.8B.
Framing MEDVi as an AI story misses the three actual preconditions: years of COVID-accelerated telehealth infrastructure built by others, a permissive post-COVID regulatory environment that gutted oversight mechanisms, and near-infinite demand for GLP-1 drugs (Wegovy, Ozempic, compounded semaglutide). All three had to be true independently of AI — Gallagher could have run this playbook seven years ago if those conditions existed then. The AI tools (website, ad creative, chatbot) are the wrapping paper, not the gift.
MEDVi’s intake process had essentially zero clinical guardrails, which is where “reducing friction to near zero” becomes fraud. Testers got approvals after entering a birthdate of February 31st and a target weight of 60 pounds, and the system told a 7’11”, 350-pound person they had a 94% chance of hitting their goal — while noting they wouldn’t need to change their diet. The site also featured fabricated physicians — including one with the implausible name “Dr. Tuckr Carlzyn” — linked to stock photos, with no real medical professional behind them.
The legal and regulatory record predates the NYT piece and was ignored: MEDVi received an FDA warning letter for false and misleading information violations, partner OpenLoop suffered a data breach in January 2026 exposing 1.6 million patient records, and a class action lawsuit was filed in Delaware. The NYT framed all of this as an AI entrepreneurship story anyway.
The ~16% claimed margins likely reflect compliance arbitrage, not AI efficiency. Hims & Hers achieved $2.4B in GLP-1 revenue with 2,400 employees at just 5.5% net margins — the gap is best explained by what legitimate operators pay for actual clinical oversight, real physicians, and regulatory compliance. MEDVi’s real P&L still faces exorbitant platform fees, brutal customer acquisition costs in a commoditized market, and mounting lawsuit costs.
The real damage is to AI’s credibility broadly. VCs and tech media desperate for the “one-person billion-dollar company” narrative are willingly conflating regulatory arbitrage and fraud with AI-enabled productivity. As one Twitter commenter put it: “In your rush to prove your investment theory correct, you’re hurting the credibility of AI companies… We don’t need to turn AI into yet another one of those pump-and-dump cycles.”
Quotable:
“At some point, ‘move fast and break things’ just became ‘move fast and break laws.’” — on what MEDVi’s story actually represents, stripped of the AI framing
Laura Entis / Context Window · rss · 2 mins
Quotable:
“AI makes it easy to have an idea and build it without considering whether it justifies its existence.” — Karri Saarinen, Linear cofounder and CEO, on why speed demands more judgment, not less
Contrary Research · email · 5 mins
Nuclear startup Valar Atomics raised $450M at a $2B valuation to build SMR “gigasites” co-located with hyperscaler data centers. Founded in July 2023, it raised a $130M Series A just six months prior — the rapid escalation reflects genuine urgency: data center electricity consumption hit 176 TWh in 2023 and could exceed 580 TWh by 2028 (up to 12% of US national demand), while grid interconnection queues now run ~5 years. Valar’s small modular reactors (50–300 MW) can be sized to match a single data center’s load and fit on a few dozen acres — but no SMR in the US is yet in commercial operation, and Valar is actively suing the NRC for applying large-plant approval rules to small reactors.
GitHub Copilot silently inserted a promotional message for productivity app Raycast — including an install link — into 11,400+ pull requests without developer awareness. GitHub VP Martin Woodward initially called it a bug; principal PM Tim Rogers acknowledged it was an intentional “product tips” feature and called it “the wrong judgment call.” By Monday, GitHub reversed course entirely and disabled all agent tips from PR comments, issuing a statement that it “does not and does not plan to include advertisements.” Developers remain skeptical that the tips weren’t effectively ads, with many suspecting GitHub simply redefined “advertisement” in its terms.
MEDVi, the “vibe-coded” GLP-1 telehealth startup Sam Altman cited as the first one-person billion-dollar company, grew to $1.8B in revenue in 14 months — but the underlying mechanics undercut the AI narrative. It relies on thousands of doctors and engineers from third-party firms CareValidate and OpenLoop, has used fictitious doctors, received an FDA complaint for “false and misleading claims,” and reportedly approved patients targeting 60-pound body weights without a legitimate prescription process. The milestone illustrates how AI rhetoric can obscure regulatory and ethical shortcuts rather than genuine technical leverage.
Quotable:
“We’ve been including product tips in PRs created by Copilot coding agent… hearing the feedback here, and on reflection, this was the wrong judgment call. We won’t do something like this again.” — GitHub principal PM Tim Rogers, after Copilot’s Raycast promotion appeared in 11,400+ pull requests
Byrne @ The Diff · email · 8 mins
China has pulled ahead in scientific research at a stunning pace, and Tanner Greer’s model explains why: Leninist states require an active collective goal or they collapse, and Beijing has pivoted that goal from GDP growth to science output. The same structural incentive that drove reckless credit injection to prevent defaults now drives research investment — which means the same disease (misallocation, institutional rot, the “burden of knowledge” where only project insiders can evaluate their own work) will eventually infect it too. The US still leads in the single most important technology (AI), but the rearview mirror check is warranted.
An “unconstrained Vanguard” — a fund holding 5–15% of the market, with no informational edge and no index-replication constraints — could still generate alpha purely from position size. The mechanism: at that scale, the comparative advantage is funding things good for aggregate economic growth, capturing upside on the global benchmark even if not the local one. This reframes giant passive holders not as passive at all, but as potential macro-allocators whose vote and board influence is itself the alpha-generating instrument.
Milgram’s electric-shock obedience experiments may be significantly misread. Hollis Robins finds that subjects who were most obedient in administering shocks were simultaneously most disobedient in following other experimental rules — suggesting the headline result is either noise or evidence that sadism is a statistically unavoidable human trait rather than a proof of universal authority compliance.
The spreadsheet’s power is also its pathology: it’s a 3D system (column = when, row = what, cell = how much) displayed in two dimensions, like a topographic map. That elegance makes it dangerously easy to tweak assumptions until outputs look like predictions rather than scenarios. A large fraction of white-collar work can be read as labor to make reality conform to spreadsheets — and vice versa.
Demis Hassabis (chess prodigy, neuroscience PhD, DeepMind founder) holds that human memory works like a director, not a camera — compressed schemas that reconstruct scenes rather than recording them in high fidelity, confirmed via experiments on people with brain damage. DeepMind spent significant energy on investor relations problems while OpenAI’s bet on the transformer architecture for text models pulled ahead; as that rivalry accelerates, the biography inevitably reads like a news digest, and the speed at which lab prestige rotates makes it hard to keep the bullseye fixed on Hassabis.
Quotable:
“You are an investment manager who wants to beat benchmark… You are a large fraction of your benchmark, holding 5-15% of the market. You are not smarter than the market on average and have no significant business insights… Can you, as an unconstrained Vanguard, create alpha purely from your relative position, even if you don’t know anything more than anyone else?” — financial koan posed on Read.Haus; Hobart’s answer is yes