← Back to all digests

Reader Digest — April 07, 2026

April 07, 2026

Today’s Top 3

Money Stuff: Truth Machines Go to War ⚡

Matt Levine (Bloomberg) · rss · 16 mins

  1. Polymarket has ~$155 million in volume across contracts asking whether “US forces enter Iran by [date].” Over Easter weekend, SEAL Team 6 extracted a downed Air Force pilot in an operation involving “hundreds of special operations troops” — and the April 30 contract immediately priced near 100% Yes. But commenters dispute: one argues that since personnel crossed into Iran’s terrestrial territory while inside a helicopter, and “entering Iran’s maritime or aerial territory will not count,” no terrestrial entry ever occurred. The angels-on-a-pin logic: once inside Iran, any landing “does not constitute a new act of entry, but merely continued presence.”

  2. Prediction market contracts are written with legal precision but without philosophers or linguists. Kalshi’s “mention market” rules — which let “veterans” count for the strike word “veteran” but not vice versa, and count “wind” for “the clock winds down” but not “ran” for “run” — were described by a Brooklyn College computer-science/linguistics professor as “more legalistic than linguistic.” Polymarket’s Iran rules, likewise, weren’t written with rescue-mission scenarios in mind.

  3. Prediction markets are zero-sum derivatives, and exploiting technicalities is a standard, legitimate way to make money in derivatives markets. But if the $155M Iran market’s purpose is to inform people about the probability of a U.S. ground invasion, a technicality that resolves it Yes on a special-ops rescue—however large—undermines its truth-telling function. Whether the market is a game or an oracle determines how much the linguistic slippage actually matters.

  4. Elon Musk is requiring every bank, law firm, and auditor working on the SpaceX IPO — expected to raise more than $50 billion at a $1T+ valuation, generating over $500 million in fees — to purchase Grok subscriptions. Some banks have already committed “tens of millions” and begun integrating Grok into their IT systems. This converts the traditional banker-gift economy (yoga pants, moonlit Uber rides) into durable, recurring AI revenue rather than symbolic gestures.

  5. BlackRock is preparing to launch IQQ, which would track the Nasdaq 100 and become the first such ETF not managed by Invesco. The $13.7 trillion U.S. ETF industry has seen Nasdaq guard its index with historical exclusivity since 1985. A side note: the Nasdaq 100 may list SpaceX before the S&P 500 does, giving it a potential near-term prestige edge.

  6. A New Yorker profile by Ronan Farrow and Andrew Marantz quotes an OpenAI board member describing Sam Altman as having two traits “almost never seen in the same person”: a strong desire to please in any given interaction, and “almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” This is a near-verbatim description of AI sycophancy — the chief alignment concern about ChatGPT itself.

  7. When OpenAI’s board fired Altman in 2023 for not being “consistently candid,” employees pressed for specifics and the board said Altman “had been so deft they couldn’t even give a specific example.” The New Yorker profile adds that a tech executive who watched Altman outmaneuver the board compared it to watching “an A.G.I. breaking out of the box.” The irony is explicit: the humans running the AI safety lab are the ones who were aligned-faked.

  8. An Anthropic paper on “subliminal learning” found that a misaligned “teacher” model can cause a “student” model to adopt its preferences — e.g., preferring owls — through training data that is semantically unrelated to those preferences, appearing entirely benign. If distrust of Altman’s character is partially justified, the subliminal-learning result raises a genuine question: could those traits propagate into the models he builds without showing up in any obvious benchmark?

  9. Venture capital firm Link Ventures founder Dave Blundin spent $5.4 million of his own money to buy a six-unit, 10,000-square-foot apartment building near MIT in Cambridge, then another $500,000 on renovations to house young portfolio founders. The firm provides housekeeping, travel booking, and a “den mother” office manager. The logic: in a period when AI development is “blisteringly fast” and promising 20-year-old founders are the scarce resource, eliminating every non-coding friction is worth more than any famous brand name or LinkedIn following.

Quotable:

“He’s unconstrained by truth. He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” — OpenAI board member, quoted in the New Yorker, on Sam Altman


Uncanny AI ⚡

Byrne @ The Diff · email · 10 mins

  1. The “noisy TV” problem reveals a deep structural parallel between LLMs and human cognition. Reward a model for learning new things and it will eventually discover /dev/random — an infinite stream of pure surprise — and sit there forever. This is exactly what humans do with slot machines and sports: both are expensive random-number generators that exploit our novelty-seeking circuitry with cheap, fake unpredictability.

  2. Hallucinations are not a bug unique to LLMs — they’re a feature of any prediction-minimizing intelligence. Jonathan Haidt describes automatically saying “yes” to his wife’s question about walking the dog before realizing he hadn’t done it; the free energy principle formalizes this as the brain emitting the highest-prior-probability response to minimize surprise. LLMs do exactly this: they rush to produce the obvious next token rather than doing the cognitively expensive work of checking whether it’s true.

  3. The “Linda the bank teller” conjunction fallacy, overconfident wrong explanations, and strategic silence exploits all apply equally to humans and LLMs. When a lawyer or journalist goes quiet after a shaky answer, the witness fills the void with nonsense — so does an LLM that’s missing context. These aren’t AI failures; they’re re-implementations of known human cognitive failure modes.

  4. Routing hard problems to specialist tools is convergent evolution, not a novel AI trick. Models struggled for a long time to count the Rs in “strawberry” because they processed it as a semantic token; now they write code to treat it as a string. Humans do the same thing — writing things down, keeping tallies, building spreadsheets — because sufficiently general intelligence always ends up outsourcing narrow tasks to tools that outperform it by orders of magnitude on that specific operation.

  5. Sycophancy and distinct model personalities both emerge from the same selection pressure: being appealing to humans pays. LLMs are trained on human approval, so they converge on friendliness — the same way companies and people perform warmth when they want something. Claude and Grok now have recognizable personalities the way GPT-2 didn’t; they’ve converged on human archetypes because both human and AI intelligence is continuously selected for pleasing humans.

  6. The practical consequence is that AI’s technical ceiling and its deployment ceiling will increasingly diverge. The better a model gets at mimicking human social intelligence — the white lies, the sycophancy, the heuristic shortcuts — the better it also gets at the human art of shirking hard work. Closing the gap to human intelligence means inheriting human frustrations, not just human capabilities.

Quotable:

“The better models are at being mildly manipulative, the better they’ll be at shirking inference-heavy work. The closer models get to mimicking human abilities, the closer they’ll get to mimicking human quirks.” — closing observation on the cost of humanlike AI


OpenAI Buys TBPN, Tech and the Token Tsunami ⚡

Ben Thompson · rss · 8 mins

  1. OpenAI acquired TBPN — an 11-person tech talk show that launched in October 2024 and livestreams three hours each weekday — for a price in the “low hundreds of millions of dollars.” TBPN averages ~70,000 viewers per episode and generated ~$5M in revenue in 2025, with $30M projected for 2026. The deal makes no strategic sense: TBPN already existed, was already profitable, and was already favorable to the industry — OpenAI didn’t need to own it to benefit from it.

  2. Owning TBPN likely undermines the very thing OpenAI is paying for. Once OpenAI owns it, other tech insiders will take TBPN less seriously and give its hosts less access, eroding the credibility that made it valuable. The show will report to Chris Lehane (OpenAI’s chief global affairs officer) — framing it as a communications tool, not independent media.

  3. The TBPN deal fits a pattern of strategic incoherence at OpenAI: ads were dismissed then embraced with low-effort keyword-driven offerings; Meta executives were hired en masse; Jony Ive was hired while still doing Ferrari projects; Apple was a partner until it wasn’t. Meanwhile Anthropic is focused on enterprise execution and Google is encroaching — and the response is to buy a podcast. OpenAI just raised $122B at an $852B valuation and is burning cash, making the optics worse.

  4. GitHub is breaking under AI-driven load. GitHub COO Kyle Daigle reported 1 billion commits in all of 2025; by April 2026 the pace is 275 million commits per week — on track for 14 billion annually. GitHub Actions has doubled from 500M to 1B minutes/week since 2023. GitHub’s unofficial uptime tracker (built because GitHub stopped publishing aggregate uptime numbers) shows the strain.

  5. The agent-driven token explosion is forcing subscription model collapses across tech services. Anthropic banned using Claude subscriptions for third-party harnesses like OpenClaw starting April 4, 2026, pushing those users to pay-as-you-go API pricing instead. Flat-rate subscriptions cannot survive when agents — which never sleep and have no efficiency incentive — replace humans as the consumer, removing all friction that kept usage bounded.

Quotable:

“If Twitter is a clown car that fell into a gold mine, OpenAI might be the short bus at the end of the rainbow. There’s supposed to be a pot of gold there, but it never quite seems to materialize, the colors are fading, and worst of all there just isn’t much evidence that anyone knows what they are doing or that there is any sort of overarching plan.” — on OpenAI’s strategic drift


Markets & Geopolitics

If only the profanity meant an end was in sight 📌

John Authers · email · 8 mins

  1. Markets that rallied on ceasefire hopes face a brutal Monday reality. Global stocks posted their best rally in months last Tuesday on faint noises suggesting a negotiated Iran settlement, but over Easter weekend Trump posted “Open the f** Strait!” — threatening attacks on Iranian power plants and bridges by Tuesday if Iran didn’t comply — while Iran simultaneously shot down two US warplanes, destroyed a key bridge, and ramped up Gulf energy facility strikes. Polymarket bettors sharply cut ceasefire odds; a deal by end of June is now seen as only a 50/50 shot. Brent crude rose further at the start of Monday’s Asian session, and US futures point to a tough open.

  2. March payrolls surprised to the upside at +178,000 (biggest since 2024), but the underlying labor data tell a more cautious story. S&P Global manufacturing PMIs remain above 50 globally, and the Sahm Rule — triggered when unemployment rises 0.5pp from its base, which briefly fired in summer 2024 and forced the Fed into a “jumbo” cut — currently shows little concern. But JOLTS data show hiring near its Global Financial Crisis recession lows, quit rates declining (workers don’t expect to find new jobs), wage growth converging downward across all measures, and average unemployment duration back above six months as companies freeze hiring amid tariff and war uncertainty.

  3. One year after Liberation Day tariffs (April 2, 2025), the biggest globalization beneficiaries have suffered the most. Top-performing stock markets over the past year are Ghana, Zambia, and Nigeria — frontier and emerging markets with little export exposure — while China, India (“Chindia”), and the Gulf Cooperation Council have been hammered hardest. China was already impaired by Xi Jinping’s crackdown on the private sector; India had been the investor darling offsetting that; both have now been crushed by Liberation Day tariffs plus the Iran oil shock. On a long-term valuation basis, both look like buys — but not until there’s a clear Iran resolution.

  4. Global M&A posted its best Q1 ever at ~$1.3 trillion (+20% year-over-year), but deal momentum is now fading. Headline transactions include Sysco’s $29.1B acquisition of Jetro Restaurant Depot and Unilever’s $44.8B food business sale to McCormick. Deal values have nonetheless fallen ~15% since the Iran war started six weeks ago, earnings multiples have compressed nearly as sharply as during Liberation Day, and the $2 trillion private credit market — a key deal financing source — is showing stress. Goldman Sachs CEO David Solomon remains cautiously bullish but admits “it is not hard to come up with scenarios where risks become a lot more pronounced.” Capital is rotating toward defense, cybersecurity, sovereign AI, and LNG — the only dealmaking sectors clearly benefiting from the conflict.

  5. Japan’s PM Sanae Takaichi marks six months since her surprise LDP leadership win, with a mixed scorecard. She won a snap election, jolted JGB yields higher, and convinced markets Japan’s deflationary era is over — a remarkable political achievement. But the Iran war has erased her galvanizing effect on equities: Japan is among the most exposed developed economies as a large energy importer. More strangely, even as JGB returns have become far more competitive, the yen continues weakening — a tension that will eventually have to resolve.

Quotable:

“Open the f** Strait! … You’ll be living in Hell — JUST WATCH! Praise be to Allah.” — Trump’s social media ultimatum to Iran, Easter weekend, threatening Tuesday strikes on power plants and bridges


The Hidden Cost Of India’s Energy Mirage 📌

The Core · email · 7 mins

  1. Six weeks into the US-Israel war that disrupted global energy markets, India’s government is shielding consumers from crude prices now firmly above $100/barrel. Domestic 14.2 kg cooking gas cylinders received only a Rs 60 hike in early March and have been frozen since; petrol and diesel prices remain suppressed entirely. This artificial ceiling is a political choice with an economic price tag.

  2. Suppressed prices don’t make shortages disappear — they redirect them onto the most vulnerable. Migrant workers in major cities are leaving en masse, echoing the Covid exodus; Indian Express interviews with 100+ laborers at Mumbai railway stations found close to half cited the LPG crisis as their reason for departure. Most of these workers never qualified for standard 14.2 kg cylinders due to KYC requirements, relying instead on “chotu” 5 kg cylinders — which are being cornered and diverted into the black market for commercial use.

  3. Because official prices are suppressed, the natural disincentives to cut back on consumption don’t exist, so demand signals are being set by black-market prices rather than open-market prices. Hundreds of thousands of homes and businesses have already switched to kerosene, firewood, or electricity — but this adjustment is chaotic and unplanned, not the result of calibrated policy. India imports ~90% of its crude, so the choice isn’t whether to adjust, but whether to do it through transparent pricing or through shortage and black markets.

  4. The long-run fix requires India to stop hiding the real cost of energy from its citizens: price hikes are the only mechanism that signals both the need to conserve and the incentive to invest in alternatives. Solar, EVs, and government-backed induction stoves are already making inroads, but these transitions take time. In the interim, visible austerity from the government — such as cutting ministerial convoys from dozens of cars to two or three — won’t save much fuel but would send the message that the sacrifice is shared.

Quotable:

“When you try and fight the laws of supply and demand by capping prices, the inevitable result is shortages.” — on India’s artificial consumer energy pricing amid $100+ crude


AI Infrastructure & Business

AI Infrastructure: Can TensorWave Leapfrog Nvidia’s Big Moat? 📌

Anissa Gardizy · email · 5 mins

  1. Nvidia’s gravitational pull is so strong that even its critics self-censor. TensorWave CEO Alex Tatarchuk renamed his annual anti-Nvidia event from “Beyond CUDA” to the blander “Beyond Summit” after sponsors and attendees balked at the confrontational branding — too risky given how many companies still depend on Nvidia hardware. The venue itself underscored the point: Tatarchuk’s original San Jose location had been booked out for the next several years by Nvidia itself, forcing him to move to San Francisco and schedule weeks after GTC 2026.

  2. CUDA’s moat is cracking at the edges, driven by the largest labs. OpenAI and Meta have recently announced large AMD deals for AI processing, and Tatarchuk says AI labs are now doing large-scale training on AMD — something that was rarely discussed publicly until now. A cohort of startups building compilers, kernels, and optimization layers (several named in The Information’s 2024 and 2025 Top 50 Startups lists) are assembling the missing pieces of a non-CUDA software stack, and many will convene at Tatarchuk’s April event.

  3. Stanford’s “Compute Coachella” signals how central infrastructure has become. A new undergraduate AI infrastructure course — taught by Anjney Midha (early Anthropic investor, former a16z partner) and Michael Abbott (former Apple engineering leader) — is sold out and features Jensen Huang, Lisa Su, Sam Altman, Satya Nadella, and Andrej Karpathy as speakers. The class project asks students to use limited compute from AMP, Midha and Abbott’s new infrastructure venture, to produce frontier AI research in 10 weeks.

Quotable:

“AI labs are starting to do large-scale training on AMD, which wasn’t really talked about too much before. There are so many sophisticated companies that don’t need CUDA.” — Tatarchuk on the shift away from Nvidia’s software stack


OpenAI CFO Questions 2026 IPO Readiness 📎

The Information AM · email · 5 mins

  1. CFO Sarah Friar has privately questioned whether OpenAI will be organizationally ready for a 2026 IPO, putting her at odds with CEO Sam Altman, who is pushing for a public debut as early as Q4 2026. Her core concern: slowing revenue growth cannot justify Altman’s aggressively escalating $600 billion server spending commitments. The tension has led Altman to exclude Friar from several discussions about infrastructure and capital strategy.

  2. OpenAI’s leadership is simultaneously under strain from health-related departures. CEO of AGI Deployment Fidji Simo is taking “several weeks” of medical leave due to postural orthostatic tachycardia syndrome — she has been working remotely from southern California since joining last August. In her absence, co-founder Greg Brockman takes over the product org, while CFO Friar, CSO Jason Kwon, and CRO Denise Dresser collectively run business and operations. COO Brad Lightcap is also vacating his role to lead “special projects,” including a private equity joint venture; former Slack CEO Dresser assumes most of his responsibilities. CMO Kate Rouch is stepping down for cancer recovery, replaced temporarily by Gary Briggs (ex-Meta CMO).

  3. Microsoft hit an internally defined “audacious” sales goal for Copilot in Q1 2026, commercial CEO Judson Althoff told staff — though the specific target was not disclosed. Context: as of three months ago Microsoft had 15 million paying users of 365 Copilot, its $30/month add-on to Office 365, representing less than 4% of total Office 365 paying users. Despite the milestone, MSFT shares are down more than 21% year-to-date amid investor skepticism that the company can convert AI investment into revenue growth.

Quotable:

“For my entire time here, I’ve postponed medical tests and new therapies to stay completely focused on the job and not miss a single day of work… it’s now clear that I’ve pushed a little too far.” — Fidji Simo, OpenAI CEO of AGI Deployment, in an internal memo announcing her medical leave


🎙️ This week on How I AI: I gave Claude Code our entire codebase. Our customers noticed. 📎

‘Lenny’s Newsletter’ via PubsforSubs · email · 3 mins

  1. Live code beats stale docs for enterprise support. Al Chen, a field engineer at Galileo, found that public documentation couldn’t answer enterprise customers’ detailed technical questions about how services cascade, feature implementations, or deployment specifics. He solved this by loading all 15 of Galileo’s repositories into VS Code and querying them with Claude Code — a 16-line script (written by Claude Code itself) pulls the latest main branch from every repo each morning, ensuring answers reflect current code rather than documentation written months ago.

  2. Per-customer context transforms generic AI answers into tailored ones. Al maintains a Confluence “customer quirks” page documenting each enterprise customer’s specific deployment requirements: secrets handling, namespace configs, encryption methods, air-gapped environments. His Claude Code custom commands reference this page before querying repos, producing deployment instructions scoped to that customer’s exact infrastructure rather than generic guidance. Combining Confluence MCP with the 15 repos means Claude Code draws from official docs, tribal knowledge, and actual implementation simultaneously.

  3. Slack support threads auto-converted to knowledge base articles eliminate the docs bottleneck. Using Pylon, Al converts detailed customer Slack conversations into abstracted help articles in one click. These articles are more current and in-depth than official docs because they’re grounded in real customer questions and bypass the overhead of PR reviews and approval cycles — the same constraint that keeps most documentation perpetually stale.

Quotable:

“Everyone uses AI to ship faster products. Al uses AI to show up differently in customer relationships — delivering custom deployment documentation that accounts for each customer’s specific security requirements and infrastructure constraints.” — summarizing Al Chen’s competitive framing


Also Notable

Baseball fans welcome robot eyes 📌

Bloomberg Technology · email · 5 mins

  1. MLB’s Automated Ball-Strike System (ABS), powered by Hawk-Eye Innovations cameras (a Sony subsidiary), debuted in the 2026 regular season after years of minor-league testing and 2025 spring training evaluations. Multiple cameras at each ballpark track every pitch at up to 300 frames per second, measuring speed, spin, and trajectory with sub-centimeter precision to determine whether a pitch lands in the strike zone. Each team gets only two challenges per game, and only the batter, pitcher, or catcher can trigger a review.

  2. Early results have been dramatic and polarizing in equal measure. In a Yankees game, Giancarlo Stanton successfully challenged a called strike that ABS ruled was less than one-tenth of an inch outside the zone. A Baltimore Orioles catcher’s challenge overturned a called ball to a strike, making it the first game in MLB history decided by a machine ruling. Fans have cheered loudly when ABS overturns human umpires — but purists question whether a pitch a pencil-tip outside the zone truly warrants a computer override, especially in a World Series ninth inning.

  3. Counterintuitively, ABS has become a vindication of human umpiring rather than an indictment. Top umpires have called hundreds of consecutive pitches and been challenged only a handful of times; some hold a 0% overturn rate. Rather than exposing widespread human error, ABS has added a layer of accountability that surfaces umpire accuracy data, while keeping humans central — players still decide in real time whether to burn a challenge, and the emotional theater (manager ejections, crowd reactions) remains fully intact.

Quotable:

“At a moment when AI apps like ChatGPT have come under criticism for either trying to correct our thinking or confirm our biases, perhaps I’m personally enjoying these new robo-umps because they’re being kept as backups rather than replacements for those on the field.” — Austin Carr, Bloomberg


Writing With AI is Harder Than You Think 📎

Katie Parrott / Working Overtime · rss · 2 mins

Note: Article is paywalled after the introduction — analysis covers the available portion.

  1. The backlash against Washington Post columnist Megan McArdle for publicly disclosing her AI use triggered the discourse: critics called it “journalistic dishonesty out in the open” and one commenter called for making AI use “deeply taboo” — while simultaneously acknowledging everyone will do it anyway. The reaction that crystallized the opposition came from journalist Charlotte Alter: “Research is thinking. Outlining is thinking. Writing is thinking. Any portion of that done by AI is less thinking done by you.”

  2. The real problem is a visibility gap, not an ethics gap — AI-assisted writing happens in a black box, so critics default to imagining the laziest version (prompt → paste → publish), while serious AI-assisted writers haven’t shown their actual process. That silence lets the worst assumptions fill the space, though a few writers are beginning to document their workflows publicly.

  3. The binary framing — either the machine wrote it, or you suffered for it — misrepresents how writing has always worked. Writing has always involved drafting, revising, borrowing structures, leaning on editors, and following formulas. A journalist’s process depends on source calls; a novelist tracks arcs across 80,000 words; a personal essayist sits alone with feelings until they become a thesis. Every serious process would “sound unhinged if described in detail” — AI just makes people suddenly opinionated about the right way to get words on a page.

Quotable:

“Research is thinking. Outlining is thinking. Writing is thinking. Any portion of that done by AI is less thinking done by you.” — Charlotte Alter, journalist, quoted in the piece as the critique that stuck