April 04, 2026
Clouded Judgement by Jamin Ball · email · 9 mins
Three major supply chain attacks hit within 48 hours: Claude Code’s source code (500,000 lines + unreleased feature roadmap) leaked via a misconfigured npm package; Mercor was breached through a compromised LiteLLM dependency with Lapsus$ claiming 4TB of stolen data including source code, databases, and contractor interviews; and the axios npm package — 100 million weekly downloads — was hijacked by North Korean state actors who injected a cross-platform remote access trojan. The common thread is that a single misconfigured file or compromised maintainer is enough to unravel the entire software supply chain.
AI agents operating autonomously create a trust problem orders of magnitude larger than today’s software supply chain. When an agent is booking flights, executing trades, or wiring $50,000, the implicit assumption is that it’s running the model it claims, on the inputs it received, without mid-execution manipulation — but there’s currently no cryptographic basis for any of that trust. A compromised model with tampered weights could silently make malicious decisions across every critical enterprise system.
Zero knowledge proofs (ZK proofs) allow one party to prove a statement is true without revealing the underlying information — classically illustrated as proving two balls are different colors to a colorblind person by consistently identifying whether they were swapped, without them ever seeing the colors. In crypto, ZK proofs enable privacy-preserving transactions and ZK rollups (bundling hundreds of transactions into one on-chain proof), but they’ve stayed trapped inside crypto because generating a proof historically required 100x to 1,000,000x the compute of just running the original computation — a non-starter for millisecond AI inference.
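The colorblind-ball illustration is itself a tiny interactive proof, and it is easy to simulate: an honest prover (who really sees two colors) survives every swap-or-not challenge, while a cheating prover's odds of surviving n rounds are 2^-n. This is a hypothetical sketch of the interactive-proof idea only, not any real ZK construction:

```python
import random

def run_protocol(balls_differ: bool, rounds: int = 20) -> bool:
    """Simulate the colorblind-verifier game.

    Each round the (colorblind) verifier secretly either swaps the two
    balls or leaves them alone, then asks the prover: 'did I swap them?'
    A prover who can truly see two different colors always answers
    correctly; a prover facing identical balls can only guess.
    Returns True if the prover answers correctly in every round.
    """
    for _ in range(rounds):
        swapped = random.random() < 0.5        # verifier's secret coin flip
        if balls_differ:
            answer = swapped                   # honest prover sees the swap
        else:
            answer = random.random() < 0.5     # cheater must guess
        if answer != swapped:
            return False                       # one wrong answer: rejected
    return True                                # convinced, colors never revealed

# An honest prover always passes; a cheater passes a 20-round run
# with probability 2^-20, i.e. essentially never.
assert run_protocol(balls_differ=True)
cheat_wins = sum(run_protocol(balls_differ=False) for _ in range(1000))
print(f"cheater passed {cheat_wins}/1000 runs of 20 rounds")
```

The verifier ends up convinced the balls differ yet learns nothing about which ball is which color, which is the "zero knowledge" half of the guarantee.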
Recent breakthroughs are collapsing the overhead gap: compute overhead has dropped from 1,000,000x toward 10,000x and keeps falling, image classification models can now be proven in seconds, and recursive SNARKs (“folding schemes”) have compressed proof sizes from gigabytes down to under 100 kilobytes. GPU acceleration, dedicated ZK chips, and improved algorithms are converging simultaneously, making ZKML (zero knowledge machine learning) viable for a growing set of practical applications even if real-time proof generation for every inference isn’t here yet.
ZKML unlocks five distinct trust guarantees that don’t exist today: (1) Model integrity — a bank can prove to regulators it used only approved model weights for credit decisions without exposing the proprietary model; (2) Input integrity — the full chain of input→model→output is provable, detecting pipeline injection attacks; (3) Agent verification — every autonomous agent action gets a cryptographic receipt, auditable without re-running the computation; (4) Privacy-preserving AI — neither the user nor the AI provider sees the other’s sensitive data or proprietary model, unblocking AI in healthcare, finance, and defense; (5) Agent-to-agent trust — as purchasing agents negotiate with supplier agents, each can cryptographically prove its identity, logic, and outputs to the other, creating a fundamentally new trust primitive.
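A hash commitment can sketch the "receipt" half of guarantee (3), though it is only a building block: the commitment binds an agent to exactly which model, input, and output were involved, while a real ZKML system would additionally attach a proof that the output actually came from the committed model. A minimal Python sketch, with all field names hypothetical:

```python
import hashlib
import json
import secrets

def make_receipt(model_hash: str, inputs: dict, output: dict) -> dict:
    """Bind an agent action to a specific model, input, and output.

    This is only the commitment half of a verifiable-agent design: the
    receipt makes after-the-fact tampering detectable, whereas a full
    ZKML system would also prove the output was computed by the
    committed model. All field names here are hypothetical.
    """
    nonce = secrets.token_hex(16)              # blinds otherwise-identical actions
    payload = json.dumps(
        {"model": model_hash, "in": inputs, "out": output, "nonce": nonce},
        sort_keys=True,                        # canonical ordering before hashing
    )
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return {"payload": payload, "commitment": digest}

def audit(receipt: dict) -> bool:
    """Recompute the hash: any edit to model, input, or output fails."""
    expected = hashlib.sha256(receipt["payload"].encode()).hexdigest()
    return expected == receipt["commitment"]

r = make_receipt("sha256:approved-weights", {"ticker": "ACME"}, {"action": "hold"})
assert audit(r)
tampered = dict(r, payload=r["payload"].replace("hold", "buy"))
assert not audit(tampered)
```

An auditor holding only the receipts can detect any rewritten decision log without re-running the agent, which is the property the "cryptographic receipt" framing points at.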
Every major platform shift required a corresponding trust layer that became foundational infrastructure: SSL/TLS for the internet, app store review and sandboxing for mobile, IAM and zero trust networking for cloud. NIST launched an AI Agent Standards Initiative in February specifically targeting security and interoperability for autonomous agents; Microsoft just unveiled a Zero Trust for AI framework. ZKML is the leading candidate for the agentic era’s trust layer, and the more cryptographically verifiable agents become, the more autonomy they can be granted — and the more value they can create.
Quotable:
“Trust has always been the bottleneck for autonomy. The more we trust agents, the more autonomy we give them. The more autonomy they have, the more value they create. Zero knowledge proofs could be what unlocks that loop.” — closing thesis
Benn Stancil · email · 9 mins
Coding is a “tidy” domain to train AI; business decision-making is not. You can lock a model in a room with coding assignments, run it through test/fail cycles, and get clean training signal — software either works or it breaks, producing a strong gradient. OpenAI’s Fidji Simo announced a pivot to “coding and business users,” but general business problems are fundamentally different: “Tell me how to turn around my struggling business” is uncontained, drawing on customer Slack messages, a viral TikTok, a random geopolitical event. You can’t sandbox all of reality.
Allbirds — sold for $39M after a decade of operations — is a ready-made enterprise AI training gym. Allbirds operated from 2015–2025, sold over $1 billion in shoes, ran 60 global stores, and accumulated a complete corporate data universe: emails, Slack, CRMs, ERPs, ad campaigns, financial statements, SEC filings, lawsuits, and slide decks. For an AI lab that wants to train agents on real enterprise workflows — instead of inventing millions of fake product orders and fake customer interactions — that archive is an extraordinarily dense sandbox. The alternative is to “invent the universe.”
For OpenAI, $39M is arithmetically irrelevant — it’s what the company spends every 16 hours. OpenAI made $13B in revenue and burned $8B in 2025, which puts total annual spend around $21B, or ~$57M per day. The $39M Allbirds price tag is 0.004% of OpenAI’s $852B valuation. If even a tiny improvement in enterprise agent training is possible, Pascal’s Wager logic applies: when you’re betting on replacing all knowledge work, almost any cost that raises the probability of success by even a small percentage is justified.
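A quick check that the item's back-of-envelope figures hang together (all inputs are taken from the item itself):

```python
# Back-of-envelope check of the OpenAI/Allbirds numbers (figures from the item).
revenue_b, burn_b = 13.0, 8.0                  # 2025 revenue and cash burn, $B
total_spend_b = revenue_b + burn_b             # spend = revenue + burn ≈ $21B
spend_per_day_m = total_spend_b * 1000 / 365   # ≈ $57.5M per day
hours_of_spend = 39 / spend_per_day_m * 24     # $39M ≈ 16 hours of spend
pct_of_valuation = 39 / 852_000 * 100          # ≈ 0.0046%, the item's "0.004%"

print(f"${spend_per_day_m:.1f}M/day; $39M ≈ {hours_of_spend:.0f}h; "
      f"{pct_of_valuation:.4f}% of valuation")
```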
Failed businesses may have acquired a new kind of asset value as AI training data. In the old model, companies were worth money because they made money; software was worth money because it did something useful. Now AI labs pay for code repositories not to run the software but to feed models. By the same logic, a decade of real corporate operations — including how not to run a business — could be worth more to an AI lab than to a conventional acquirer. Distressed companies are, in effect, labeled datasets of organizational behavior.
Block is restructuring itself around an AI “world model” that replaces management hierarchy. Block published a manifesto arguing that the Roman Army’s command problem — coordinating thousands across vast distances — was solved by hierarchy, but AI can now replace that coordination layer entirely. Their proposed model: intelligence lives in a continuously updated AI system; human employees sit at “the edge,” feeding facts in and executing the system’s will in the physical world. The org chart inverts — the AI coordinates, humans are peripheral.
The unanswered question is who becomes the intern when AI has better ideas than you. Block frames this as empowerment (“the edge is where the action is”), but there’s a fine line between a system that coordinates and one that decides. When an AI consistently asks better questions, surfaces options you missed, and you find yourself preferring when it drives — the human is functionally the intern carrying out the AI executive’s agenda. Interns, as Stancil notes, also “reach into places executives don’t go” — but it’s worth asking whether that demotion is progress or just a polite reframe.
Quotable:
“To teach a robot to be an engineer, you need to write a computer science test. To teach a robot to be an employee, you have to first invent the universe.” — on why enterprise AI is a fundamentally harder training problem than coding AI
Eric Markowitz · email · 10 mins
A local dry cleaner named Howard survived the pandemic not through grit but through two decades of relationship-building. In 2020, when offices shut and dress codes collapsed overnight, his long-time customers voluntarily brought in dusty curtains, area rugs, and old coats they didn’t actually need cleaned — they came to keep him afloat. Competing shops in the same neighborhood that were equally “hardworking” are now gone.
Resilience behaves like an ecosystem property, not a personal virtue — and modern business has spent 40 years destroying the conditions that make it possible. A forest survives pests and clearings through interconnected fungal networks, redundancy, and slack; “just-in-time” supply chains and “lean” headcounts eliminate all three. Optimizing for a perfectly sunny day guarantees the system shatters when it rains.
Beretta, founded in 1526, refused to lay off its master craftsmen after WWII when military contracts evaporated — instead pivoting to making cars not as a strategic bet but purely as a stopgap to keep people employed. Those craftsmen carried 30 years of metallurgical knowledge that couldn’t be rehired; when demand for firearms returned, Beretta’s expertise was fully intact. Mass layoffs damage an organization’s nervous system: survivors become defensive, stop sharing bad news, and the unwritten institutional memory of how things actually get done walks out the door.
Slow growth produces structural integrity; fast growth produces fragility. The bristlecone pine Methuselah, nearly 5,000 years old in California’s White Mountains, grows only millimeters per year — its wood is so dense pests can’t bore in and so resinous it’s rot-proof. Quibi raised $1.75 billion from Sequoia and NBCUniversal, projected 7.4 million users in year one, hit barely 500,000 in six months, and shut down in December 2020. The long-lived companies studied for Outlast expand at “natural speed” — a pace where internal capabilities can keep up with external complexity — prioritizing coherence (the ability to coordinate under stress) over size.
Kongō Gumi, a Japanese construction company founded in 578 A.D. and still operating after 1,400 years, treats its Buddhist temples as never finished — roofs replaced every 60 years, beams inspected every decade, in permanent stewardship. The law of entropy in old systems is non-linear: neglect a 100-year-old house’s deck and the decline is not gradual but exponential. Resilient organizations use good times to pressure-test assumptions and hunt for micro-fractures while they still have the cash and psychological bandwidth to fix them; during booms, most organizations let culture slip, vendor relationships go transactional, and middle manager training stop — precisely when maintenance is cheapest.
Quotable:
“The person who survives the abyss is the one with a dozen people standing at the top holding a rope.” — on why resilience is a collective achievement, not the hero’s journey
Bloomberg Technology · email · 4 mins
Google’s employee activism on military AI has collapsed from mass protest to symbolic gesture. In 2018, more than 4,000 Googlers signed a petition against Project Maven — a Pentagon program using AI to analyze drone war footage — forcing the company to drop the contract. Today, Google has reversed course entirely: it removed anti-weapons language from its AI principles in 2024, and in March 2026 a senior Pentagon official confirmed Google would deploy AI agents for routine military work. The current protest consists of roughly a dozen employees, including chief scientist Jeff Dean, signing an amicus brief in support of rival Anthropic’s lawsuit against the Defense Department — a far quieter act with no leverage over Google itself.
Post-layoff fear has neutered internal dissent, and company structure limits what employees can even know. The Alphabet Workers Union’s Alec McGinnis explicitly links job insecurity to reduced organizing capacity, saying the union is pushing for buyout-before-layoff policies specifically to restore workers’ willingness to push back on Pentagon contracts. Margaret Mitchell — who co-led Google’s ethical AI team before being fired and is now chief ethics scientist at Hugging Face — notes that a siloed company handling ever more sensitive classified client work makes it structurally impossible for most employees to learn what their technology is actually being used for, let alone change it.
Quotable:
“The bottom line is that no matter what Googlers might think, they might not be able to shape what the company does more broadly in terms of these sort of geopolitical issues. Your job as a Googler is just to figure out how much you want to move forward with Google’s vision.” — Margaret Mitchell, former co-lead of Google’s ethical AI team, now chief ethics scientist at Hugging Face
The Information AM · email · 7 mins
Blackstone, the world’s largest alternative asset manager, agreed to acquire a 49% stake in Rowan Digital Infrastructure—a five-year-old Denver data center developer—at a valuation of roughly $3.8 billion excluding debt. Blackstone is expected to hold strong control rights despite the minority stake. The deal came together after rival bidder Sixth Street, which had been in advanced talks with Rowan’s owner Quinbrook Infrastructure Partners, backed out last month.
OpenAI acquired daily tech podcast TBPN—reportedly on track for $60 million in revenue this year—with co-hosts John Coogan and Jordi Hays reporting to Chief Global Affairs Officer Chris Lehane. The acquisition was driven by Fidji Simo (OpenAI’s CEO of AGI Deployment), who saw it as a PR vehicle after several public relations missteps and the departure of communications chief Hannah Wong. The deal is internally contradictory: Simo has championed cutting “side quests,” and OpenAI pledged just weeks earlier to do fewer projects outside its core business—yet the acquisition went ahead anyway, surprising OpenAI employees enough that some thought it was an April Fools’ joke.
The Pentagon appealed a federal judge’s order pausing its designation of Anthropic as a supply chain risk. Judge Rita Lin had granted a preliminary injunction, calling the designation “likely both contrary to law and arbitrary and capricious”—the DoD had declared Anthropic a risk after contract talks broke down over Anthropic’s refusal to strip AI safeguards. A second, separate supply chain risk designation under a different statute remains in effect and is being challenged in the D.C. Circuit Court, meaning even a successful injunction may not restore Anthropic’s standing in the military supply chain immediately.
Tesla delivered 358,000 vehicles in Q1 2026—a 6% year-over-year increase but below analyst consensus of 367,000 and lower than any equivalent quarter in 2023 or 2024. Shares fell 4% on the news. Musk has been publicly pivoting Tesla toward humanoid robots (Optimus) and the driverless Cybercab, but neither product is on sale and a promised Q1 Optimus showcase did not happen.
Quotable:
“The supply chain risk designation was ‘likely both contrary to law and arbitrary and capricious.’” — Judge Rita Lin, granting Anthropic’s preliminary injunction against the Pentagon
Omer Khan · email · 5 mins
Parseur, a 6-person data-extraction tool (emails, PDFs, spreadsheets) built by Sylvestre Dupont, grew 60% year-over-year despite competing against UiPath, ABBYY, and ChatGPT — companies with 100x its resources. When AI made Parseur’s original rule-based engine obsolete, Dupont rebuilt it from scratch, funded entirely by customer revenue with no outside investment. His edge isn’t a better algorithm: a customer can go from signup to extracting data in 10 minutes without talking to anyone, which wins the long tail of SMB buyers that enterprise tools ignore.
Simplicity compounds into a moat when the product stays horizontal while the interface stays dead simple. Parseur intentionally ignored “pick a niche” advice — the same tool that parses utility bills also handles pigeon genealogy PDFs. Early distribution came entirely from answering questions on Quora (not pitching), and Dupont dropped pricing from $49 to $9 just to remove friction; the learnings from those early users outweighed the lost revenue. A Zapier integration alone converts at 20–30% because it plugs Parseur directly into workflows buyers already trust.
The constraint killing most six-figure SaaS founders isn’t product or marketing — it’s the founder themselves. Every customer escalation, hiring call, and product decision funneling through one person caps growth around $300K–$500K ARR. The shift to $1M+ requires building systems and hiring people who own outcomes (not just execute tasks), and accepting they’ll do it differently — the skills that built the first stage actively block the next.
Quotable:
“If your first step is ‘talk to sales,’ something is wrong with your product.” — Sylvestre Dupont, co-founder of Parseur, on why self-serve beats enterprise sales motions for SMB SaaS
Big Think · email · 3 mins
Uncertainty has reached a level where even once-unthinkable catastrophes are now routinely priced. Polymarket listed a contract on nuclear detonation in 2026 — bettors put “yes” odds at 22% before the contract was pulled after public backlash. A separate contract on the U.S. officially confirming alien existence sits at 17%. The list of things that will “obviously never happen” is shrinking by the week.
The dominant business response to uncertainty — maximizing efficiency by eliminating all slack — is exactly backwards for long-term survival. Kongō Gumi, a Japanese construction company founded in 578 A.D. and among the world’s oldest continuously operating businesses, built its longevity on values antithetical to modern efficiency culture. Personal resilience follows a similar logic: rock climber Tommy Caldwell was kidnapped by militants in Kyrgyzstan, lost an index finger in a separate accident, and still achieved the first free ascent of El Capitan’s Dawn Wall — suggesting that resilience is neither purely innate nor simply learned, but forged through accumulated adversity.
Quotable:
“The list of ‘things that will obviously never happen’ seems to shrink by the week, for better and for worse.” — Stephen Johnson, executive editor at Big Think
The Core · email · 7 mins
Trump declared the US-Iran campaign “nearly over,” but markets read the statement as prolonging the conflict rather than ending it — oil prices surged, stocks fell, and US Treasury yields rose. The Houthis have joined the battle, threatening to bomb Israel and to choke Saudi oil’s Red Sea exit by attacking ships passing through the Bab-el-Mandeb strait, while the UK is assembling a 35-nation coalition to force open the Strait of Hormuz. Brent crude approached $120/bbl following Trump’s threat of “extremely hard” strikes on Iran.
The Strait of Hormuz closure has pushed Asian LNG spot prices to $20–25/MMBtu (per Crisil Intelligence), directly strangling India’s manufacturing sector. The HSBC India Manufacturing PMI fell to 53.9 in March from 56.9 in February — the slowest expansion in nearly four years — as firms absorbed sharply higher oil-linked input costs. Firozabad’s glass industry, a historic manufacturing cluster, has shed thousands of jobs as gas supplies dwindle.
The Indian government insists there is no cooking gas shortage, but trains leaving Mumbai for Bihar are packed with people fleeing a city where they cannot get fuel to cook. India’s attempt to curb Reliance’s profitable fuel exports by levying additional duties is considered too weak — with Europe’s diesel futures at $200/barrel, refineries can simply pay the duty and still profit handsomely from exports. The government is also delaying oil company retail price hikes because state legislature elections are underway in Kerala, Tamil Nadu, Puducherry, West Bengal, and Assam.
India’s defence exports hit $4.1 billion in FY26, a 62% jump over FY25 (Rs 38,424 crore vs Rs 23,622 crore), with PSU exports alone surging over 150% year-on-year. India now exports to more than 80 countries, with the product basket expanding from spare parts into ammunition, artillery systems, radars, and complete platforms. The number of authorised private exporters rose to 145 firms from 128.
Quotable:
“The government continues to maintain that there is no cooking gas shortage, but trains to Bihar leaving Mumbai are packed with people fleeing a city where they get no fuel with which to cook. Clearly, of some type of gas, there is no shortage.” — on the gap between official assurances and ground reality
Bloomberg Opinion · email · 7 mins
Trump called NATO a “paper tiger” — borrowing Mao Zedong’s mid-20th-century phrase — because European allies won’t back his war with Iran. The irony is thick: the US itself is depleting its arsenal faster than it can replenish it. Andreas Kluth reports the US military is roughly a month from running out of several types of missiles and interceptors, and just replacing the Tomahawks already fired will take approximately five years.
Retired Admiral James Stavridis (former NATO supreme commander) says Trump could actually get European allies to help clear the Strait of Hormuz — but only if he stopped insulting them. Rather than demanding participation in offensive operations, Trump could make a diplomatic ask for sea-control operations in the strait, drawing on significant European air and maritime capability to relieve an overstretched US Navy.
France, the UK, Japan, and ~30 other countries (excluding the US and Israel) are pursuing their own exit from the crisis. Meanwhile, China — in collaboration with Pakistan — has floated a five-point peace proposal. Beijing holds real leverage: it has ties with Iran, working channels with Israel, and a functioning relationship with Washington, making it a more acceptable interlocutor for Tehran than Western powers. But Karishma Vaswani is skeptical — the plan has no ceasefire mechanism, making it paper-thin.
Dubai’s golden age may be ending. The UAE’s sovereign wealth is estimated as high as $2.5 trillion, but its model depends on the Strait of Hormuz remaining open and the region staying investable. Lionel Laurent argues that international capital — which boosters claim has “nowhere else to go” — could in fact flow to Geneva, Milan, or a recovering Hong Kong. Freshly departed Dubai expats confirm: being on a war’s frontline was not what they signed up for.
Quotable:
“The US military is a month or so away from running out of several types of missiles and interceptors. Just replenishing the Tomahawks that the US has already fired will probably take five years.” — Andreas Kluth, on American military overstretch in the Iran conflict
FT Opinion · email · 3 mins
One year after “Liberation Day” (April 3, 2025), Oren Cass contends the tariffs have defied the doom-laden consensus: the dollar weakened as intended, countries chose negotiation over retaliation and struck agreements favorable to the US — the opposite of the tit-for-tat trade war economists predicted — and American manufacturing, he argues, has seen a positive impact.
Cass’s underlying case is empirical rather than theoretical: globalisation worked flawlessly in economic models but failed in practice — hollowing out manufacturing communities, creating trade deficits, and concentrating gains narrowly. Tariffs, widely derided in theory, are one year in showing greater real-world promise than their critics allow.
Quotable:
“Economists often observe the real world and ask, ‘but does it work in theory?’” — Oren Cass on the gap between economic consensus and lived outcomes from globalisation