The Cournot equilibrium explains why AI labs keep frontier access structurally profitable even while losing money overall
[TBPN] The Cournot Equation • Watch →
Antoine Augustin Cournot's economic model of oligopoly, first published in 1838, describes firms that compete on supply rather than price. With only three or four major AI labs at the frontier, the market doesn't equilibrate to zero-margin perfect competition; it stabilizes at a high price for frontier access. Labs signal compute investments to each other (Microsoft pauses, AWS goes all-in) and respond accordingly, creating a game of chicken around who builds the most data center capacity. The result: frontier AI access will remain expensive for the foreseeable future, because the structural incentives prevent any one player from breaking ranks and cutting prices to commoditize the market.
"Everyone's sort of keying off of each other... if someone's buying 10 billion of compute over here, they're going to counter with eight over there or try and jump to 12. It's why SemiAnalysis exists. The Cournot equilibrium comes when a small number of labs (an oligopoly) effectively choose supply at the frontier level and then the market clears at a high price for frontier access."
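The mechanism described here can be sketched with the textbook symmetric Cournot model: n firms each choose a quantity, and price clears via linear inverse demand P = a - Q with a common marginal cost c. The numbers below are illustrative, not taken from the episode:

```python
# Textbook symmetric Cournot equilibrium with n firms, linear inverse
# demand P = a - Q, and common marginal cost c. The equilibrium price is
# P* = (a + n*c) / (n + 1). All parameter values are illustrative.

def cournot_price(n, a=100.0, c=20.0):
    """Equilibrium price with n symmetric quantity-setting firms."""
    return (a + n * c) / (n + 1)

MARGINAL_COST = 20.0
for n in [1, 3, 10, 100]:
    p = cournot_price(n)
    print(f"{n:>3} firms: price {p:6.2f}, markup over cost {p - MARGINAL_COST:6.2f}")
```

With three or four suppliers the markup stays large; only as the number of firms grows does the price converge to marginal cost (the perfect-competition outcome), which is why a small frontier oligopoly clears at a persistently high price.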
AI labs actually run two separate businesses inside one P&L: a depreciating training asset and a manufacturing-margin inference factory
[TBPN] The Secret Economics of AI Labs • Watch →
The reason AI labs appear deeply unprofitable while simultaneously having healthy gross margins is that they hide two very different economic structures inside a single P&L. The first is model training: a massive upfront capital expenditure that produces a depreciating asset; each model generation declines in value as a newer frontier model is released. The second is the inference factory: a manufacturing-margin business with positive contribution margins (you can verify this by comparing API prices to open-source commodity inference costs). Companies lose money because training costs are scaling exponentially, but the underlying unit economics of inference are healthy. GPT-4, which cost roughly $100M to train, generated billions in revenue before it was superseded.
"They're the most unprofitable companies in human history, I think. But at the same time, there is an economic rationality behind all of this... Each model makes money, but the company loses money. The equilibrium I'm talking about is an equilibrium where model training scale-up has equilibrated more."
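The "each model makes money, but the company loses money" dynamic can be reproduced with a toy two-business model. Every figure below (generation names, training costs, revenues, the 50% gross margin) is a hypothetical assumption chosen only to show the shape of the argument:

```python
# Toy sketch of the "two businesses in one P&L" idea. Each model earns
# back its own training capex through inference margin, yet the company
# runs at a loss because the NEXT model's ten-times-larger training
# spend lands in the same period. All numbers are hypothetical.

models = [
    # (generation, training_cost, lifetime_inference_revenue, gross_margin)
    ("gen-1", 0.1e9, 2.0e9, 0.5),
    ("gen-2", 1.0e9, 10.0e9, 0.5),
    ("gen-3", 10.0e9, 40.0e9, 0.5),
]

for i, (name, train, rev, margin) in enumerate(models):
    model_profit = rev * margin - train  # the model, viewed in isolation
    next_train = models[i + 1][1] if i + 1 < len(models) else 0.0
    company_pnl = rev * margin - train - next_train  # next model's capex hits now
    print(f"{name}: model profit ${model_profit/1e9:+.1f}B, "
          f"company P&L ${company_pnl/1e9:+.1f}B")
```

Viewed per model, every generation is profitable; viewed as a company, the next generation's training bill swamps the margin. Only the last generation (when scale-up stops and there is no "next training run") shows a positive company P&L, which is the equilibrium the quote alludes to.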
Micron is spending $200B to break the AI memory bottleneck: the biggest supply crunch the memory industry has seen in 40 years
[TBPN] The Cournot Equation • Watch →
For decades, memory chips were low-margin commodity products. AI's insatiable demand for data transfer at inference time has inverted this. Micron, the largest American memory chip maker, is rushing to add manufacturing capacity at a scale that dwarfs most prior industrial investments. The downstream effects are already visible: the PlayStation 6 has reportedly been delayed to 2029 partly due to memory chip shortages. The bet signals that memory, not compute, may be the next critical bottleneck in the AI infrastructure stack, and that whoever controls memory supply at scale will capture enormous value.
"Micron is spending $200 billion to break the AI memory bottleneck. For decades, memory chips were low-margin commodity products. Now the industry can't make enough to satisfy data centers' hunger. This is the biggest supply crunch the memory industry has seen in more than 40 years."
Having a consumer product (not just an API business) gives AI labs crucial pricing leverage; Claude Code is Anthropic's version of this hedge
[TBPN] The Secret Economics of AI Labs • Watch →
One critique of Anthropic's business has been that it's overwhelmingly API-dependent, forcing it onto a permanent training treadmill: users switch to whatever model is smartest. A consumer product breaks this dynamic, because users develop workflows, habits, and expectations around the experience layer, not just raw model intelligence. Claude Code's fast internal adoption at Anthropic itself, and its subsequent external launch, gives Anthropic a sticky, product-layer moat that partially insulates it from model commoditization and reduces the pressure to always be at the bleeding frontier.
"Having a product, not just an API business, gives you leverage because at some point the models are smart enough where you don't need to train them... you don't need to train a model that is 4% better because people are still coming to your application and having a good product experience. Anthropic has Claude Code now, which gives them more leverage over the market."
Complex software's "tech debt" is actually a competitive moat: it encodes millions of solved edge cases that AI cannot easily replicate
[TBPN] The Cournot Equation • Watch →
The common fear is that AI will allow anyone to clone any SaaS product overnight. The hosts distinguish between "feature companies" (simple apps that are genuinely vulnerable) and deeply complex platforms where years of accumulated edge cases, customer-specific integration work, and institutional knowledge constitute an invisible moat. The very thing branded as a liability ("tech debt") is actually what makes these platforms hard to replicate: they've already encountered and solved the nine million things that go wrong in production at scale. An AI starting from scratch would need years of real-world usage to discover the same edge cases.
"There are like nine million edge cases. The tech debt sounds bad because it's a pejorative: debt equals bad. But actually it's good because what you've done is you've uncovered every single problem that can go wrong. The AI is going to have to discover all of those edge cases from scratch."
Video editing may be the last white-collar job to be automated, because the critical decisions aren't logged in any structured format
[TBPN] The Cournot Equation • Watch →
The argument that "all white-collar work will be automated because it's all logged" breaks down for creative editorial roles. The valuable decisions in video editing (what to cut, when to linger, when to "kill your darlings") are not stored in anything analogous to GitHub's pull request log. The final product exists; the cutting room floor doesn't. Editors can record their screens, but the reasoning behind each cut is not systematically captured in machine-readable form. The Matrix example is instructive: a half-second doorknob reflection shot with elaborate VFX work demonstrates that the ratio between production effort and screen time is invisible to any AI trained only on the output.
"The valuable decisions in editing aren't neatly organized in the way a GitHub log is, with pull request discussions. There's this whole concept of 'kill your darlings', like you need to be cutting more. Those decisions sort of get chronicled but they don't get neatly organized in the way a GitHub log does."
GPT-4 cost roughly $100M to train, and it quickly generated a multi-billion dollar revenue run rate, proving AI training unit economics can work
[TBPN] The Secret Economics of AI Labs • Watch →
GPT-4 is the most instructive data point in the AI economics debate. The model was considered expensive to train at the time, yet it was so capable that it generated a multi-billion dollar annual revenue run rate well before it was superseded. The inference margins were healthy enough that OpenAI made back its training investment and more. The question now is whether this pattern holds at 10x or 100x training costs, and how quickly new models depreciate in commercial value.
"GPT-4 is the really instructive example because I believe that model cost like a hundred million dollars to train and it was really expensive at the time, but then very quickly they were on a multi-billion dollar run rate... it was very clear that based on the inference margin and the ChatGPT Pro subscriptions that they made all the money back from GPT-4 and more."
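The payback arithmetic implied by the quote is simple to write down. The ~$100M training figure is from the episode; the run rate and inference gross margin below are assumed for illustration:

```python
# Back-of-envelope payback on frontier training capex. The training cost
# is the episode's rough GPT-4 figure; the run rate and margin are
# illustrative assumptions, not reported numbers.

training_cost = 100e6          # ~$100M to train, per the episode
annual_run_rate = 2e9          # "multi-billion" revenue run rate (assumed $2B)
inference_gross_margin = 0.5   # assumed manufacturing-style margin

annual_gross_profit = annual_run_rate * inference_gross_margin
payback_months = training_cost / (annual_gross_profit / 12)
print(f"Payback on training capex: {payback_months:.1f} months")
```

Even under conservative margin assumptions, training capex at this scale pays back in months rather than years, which is what "made all the money back and more" implies; the open question is whether the same holds when training costs are 10x or 100x larger.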
India's Adani Group committed $100B to AI infrastructure by 2035, the largest single such commitment in India's history
[TBPN] The Cournot Equation • Watch →
The AI infrastructure investment race has gone sovereign and global. India's Adani Group, an energy and logistics conglomerate, announced a $100 billion commitment to develop large-scale AI data centers by 2035. This dwarfs prior Indian tech investment announcements and signals that the Cournot game is expanding beyond US-based labs to national-scale players. However, the hosts note with some skepticism that 2035 is beyond even the most aggressive AGI timelines, raising questions about whether this capital will be deployed before the landscape it is targeting has fundamentally changed.
"India's Adani Group to invest $100 billion in AI infrastructure. The Indian conglomerate's investment may boost the country's ambitions to become an AI power. They said they would invest $100 billion to develop large-scale data centers by 2035, the largest such commitment in India so far."
When model training scale-up plateaus, AI labs exit the Cournot game and enter Bertrand competition, resembling cloud economics rather than a winner-take-all market
[TBPN] The Secret Economics of AI Labs • Watch →
The long-run equilibrium for AI economics depends critically on what happens to training scale-up. If frontier model capabilities plateau (if there's a "final model" that handles all knowledge work), labs shift from Cournot (supply competition, high margins) to Bertrand (price competition, compressed margins). This doesn't mean profits go to zero; cloud hyperscalers show that oligopolies with massive capital barriers can sustain meaningful margins. But it does mean the current period of high frontier margins is time-limited, and it explains why many VCs are now investing in multiple labs rather than betting on a single winner.
"At that point it does commoditize and you drop out of the Cournot equilibrium and you become more like the hyperscaler cloud market... there will be competition between the major three or four labs and it will be much more about how can you marshal enough supply to create a huge barrier to entry. It's why a lot of VC firms are getting into multiple companies; they don't think it's going to be winner-take-all anymore."
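The two regimes named here differ sharply even at small firm counts. A minimal textbook contrast (illustrative numbers, not from the episode): under Bertrand price competition with identical marginal costs, mutual undercutting drives price to cost with as few as two firms, while Cournot quantity competition preserves a markup.

```python
# Textbook contrast between the two competition regimes discussed above.
# Linear inverse demand P = a - Q, common marginal cost c; all parameter
# values are illustrative.

def cournot_price(n, a=100.0, c=20.0):
    # Quantity competition: P* = (a + n*c) / (n + 1), above cost for finite n.
    return (a + n * c) / (n + 1)

def bertrand_price(c=20.0):
    # Price competition with identical costs: undercutting pushes P* down to c.
    return c

print("Cournot, 3 firms: ", cournot_price(3))   # markup persists
print("Bertrand, 2 firms:", bertrand_price())   # price equals marginal cost
```

The transition the hosts describe is a move from the first function to the second: once capability scale-up stops differentiating the labs, the relevant lever becomes price, and margins compress toward cost plus whatever the capital barrier sustains.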
Warner Bros. is resuming Paramount merger talks: Hollywood's consolidation is being driven by streaming economics that make scale non-optional
[TBPN] The Cournot Equation • Watch →
Lucas Shaw's reporting on Warner Bros. resuming Paramount talks reflects the inevitable math of streaming: production costs are fixed, distribution is global, and only a handful of content libraries are large enough to justify standalone streaming services. The hosts note that Hollywood studios are playing their own version of a supply-game oligopoly, but one where the endgame is consolidation into two or three mega-studios, not sustained competition. The AI-generated content (like Seedance 2.0) discussed in the same episode adds an additional disruptive threat from below, potentially collapsing content production costs faster than consolidation can save the incumbents.
"Warner Brothers is going to resume talks with Paramount after two months of rejecting them and playing mind games. The company still says it's committed to Netflix, but needs to find out just how much the Ellisons will offer."
OpenAI Snaps Up OpenClaw, Yahoo's AI Search Engine 'Scout,' Inside Anduril's $60B+ Valuation Talks
TiTV (The Information) • Watch →
OpenAI's acqui-hire of OpenClaw founder Steinberger is a direct competitive response to Claude Code, but the two products target fundamentally different audiences
[TiTV] OpenAI Snaps Up OpenClaw • Watch →
OpenClaw went from a viral open-source side project to an OpenAI acquisition target in weeks, following Claude Code's explosive growth. OpenAI wants Steinberger to build personal AI agents, not just coding agents. The distinction matters: Claude Code is deliberately scoped (you specify a folder and a task, and when it's done it stops), making it enterprise-safe. OpenClaw is general-purpose and persistent: it can run in the background, check email, send Slacks, and interact with any web page. These represent two different theories of how AI agents will enter the world: enterprise-first with guardrails versus developer-first with maximum freedom. Anthropic's earlier request that Steinberger rename "Claudebot" may have foreclosed any partnership path there.
"I think Claude Co-work will have a greater in with enterprises because it offers more security. It is kind of more narrowly scoped... In contrast, OpenClaw is very general purpose. OpenAI is interested in tapping into Steinberger's talent; he clearly has a finger on the pulse of what developers are interested in in terms of agents."
OpenAI is creating a nonprofit foundation to house OpenClaw, an uncomfortably familiar structure that echoes OpenAI's own founding nonprofit shell
[TiTV] Why OpenAI Hired the OpenClaw Founder • Watch →
To preserve Steinberger's commitment to keeping OpenClaw open-source, OpenAI agreed to house the project in a new nonprofit foundation. The hosts immediately note the irony: OpenAI itself was originally a nonprofit, and has spent years in legal and governance battles trying to restructure away from that original nonprofit shell. Now it is voluntarily creating another bespoke nonprofit structure to accommodate an acquisition. The open-source commitment means OpenAI is essentially funding a public good (anyone can download, fork, and modify OpenClaw) while presumably benefiting from Steinberger's talent and attention being directed at OpenAI's agent products.
"OpenAI is going to set up a foundation that will house the OpenClaw project... it's sort of on brand for OpenAI to create a kind of bespoke nonprofit structure in order to accomplish a very particular mission. Bespoke and nonprofit, and then maybe somewhat regretting the way that they structured it."
Pinterest has 80 billion searches per month and a uniquely valuable Gen Z/millennial women user base, but can't convert either into reliable ad measurement
[TiTV] 'Code Red' and AI Debates at Pinterest • Watch →
Pinterest's core problem is a measurement gap, not a traffic gap. The platform has enormous search volume, a demographically premium user base (Gen Z and millennial women, the most valuable category for advertisers), and strong high-intent categories (clothing, beauty, furniture). But advertisers can't reliably measure the return on their Pinterest spend, which makes them reluctant to scale budgets. Pinterest's "code red" is an internal push to fix the attribution model and the ad recommendation algorithm. The CTO reports early progress, reallocating GPUs to iterate faster on ad modeling, but the stock's 20% drop signals that investor patience for a self-directed turnaround is running thin.
"Pinterest has 80 billion searches per month... They have billions of images and posts on their site. The user base is typically Gen Z and millennial women, the most valuable categories for advertisers. The question is can they fix the monetization and fix the ad business."
Anduril is seeking to raise billions at a $60B valuation (double its 2024 round) on the back of autonomous submarines, mixed-reality headsets, drones, and solid rocket motors
[TiTV] Anduril's Massive $60B Defense Bet • Watch →
Anduril's valuation trajectory reflects how dramatically defense tech has been re-rated. The company won a multi-billion dollar autonomous submarine contract from the Australian Navy, took over a nearly $20 billion mixed-reality headset contract from Microsoft (for soldiers), and has expanded into counter-drone technology, drones, and solid rocket motors. Its ambition is to join the Raytheon/Lockheed/Northrop club, a decades-old prime contractor oligopoly, by growing faster than any of them have in their histories. The current fundraise is described as a "land grab" moment: the stars are aligning under the current administration, and Anduril is racing to capture as much territory and contract value as possible before the political window narrows.
"Anduril is going to raise billions of dollars yet again. The valuation they're talking about is in the $60 billion range, roughly double the price from last year. This is a company that, if you've stopped paying attention to them in recent years, has really expanded their product set."
Steinberger didn't go to Anthropic, partly because of an earlier IP dispute over the name "Claudebot", showing how small early friction can determine billion-dollar outcomes
[TiTV] Why OpenAI Hired the OpenClaw Founder • Watch →
When OpenClaw (then called Claudebot) first went viral, Anthropic asked Steinberger to change the name because of the Claude trademark. This seemingly minor IP dispute may have foreclosed what could have been a natural partnership: Steinberger's general-purpose agent framework would complement Anthropic's enterprise-focused Claude Code. Instead, Anthropic's enterprise orientation, its historically ambivalent relationship with open source, and the early friction all combined to steer Steinberger toward OpenAI. The lesson: how a company handles early-stage open-source projects that use adjacent branding can have lasting competitive consequences.
"It's interesting that Anthropic is not where Steinberger ended up... three is that Anthropic and Steinberger got off on kind of an interesting footing because Anthropic asked him to change the name of Claudebot. You'll recall OpenClaw was originally called Claudebot and Anthropic saw that as sort of stepping on their intellectual property."
Anduril has grown successfully under both Trump and Biden: its moat is technological and operational, not primarily political access
[TiTV] Anduril's Massive $60B Defense Bet • Watch →
The narrative around Anduril often focuses on its Silicon Valley founders' alignment with the current administration. But the reporter's analysis is more nuanced: Anduril was founded under the first Trump administration, grew significantly under Biden, and is now accelerating under Trump again. Its core advantage is technological differentiation (building AI-native, software-defined defense systems faster than incumbents), not purely political access. The current moment represents a window of heightened opportunity, but the underlying business is secular. The real test will be the US Air Force autonomous fighter jet contract: winning that would confirm Anduril as a true prime contractor, not just a buzzy startup.
"This company was founded under the first Trump administration. It grew significantly under the Biden administration... so it has grown and been successful under two different political regimes. A big moment we're gonna see is whether the US Air Force awards Anduril its very lucrative autonomous fighter jet program."
Pinterest's "identity crisis" (is it social media, search, or e-commerce?) is actually its greatest vulnerability in an AI-search world
[TiTV] 'Code Red' and AI Debates at Pinterest • Watch →
Pinterest is trying to reposition itself to investors as a "search platform", not social media. This repositioning is strategically rational: search platforms have stronger advertising models (high purchase intent) and are more defensible in an era of AI-generated social feeds. But the identity confusion is real: advertisers, users, and analysts don't agree on what Pinterest is for. In a world where AI-native search can create personalized visual mood boards on demand, Pinterest's category ambiguity leaves it uniquely exposed. Google is already building Pinterest-like features into its own AI search. Pinterest needs to own its identity before someone else defines it.
"I think one thing I realized as I've been thinking about this conversation is that when I talk about Pinterest, sometimes I refer to it as a digital scrapbooking platform, sometimes social media. And so I think this identity crisis, where advertisers don't really know what the one thing is that we should be going to Pinterest for, that's sort of a problem everyone has right now."
OpenClaw's founder was spending $10,000 to $20,000 a month of personal funds on the project, and was overburdened by support requests, before OpenAI stepped in
[TiTV] OpenAI Hires OpenClaw Founder • Watch →
Steinberger's decision to join OpenAI rather than raise VC funding was driven by personal sustainability, not exit economics. Open-source projects with viral traction create a cruel trap: massive usage generates massive support burden and API costs, with no corresponding revenue. At $10,000 to $20,000 a month in personal inference and development costs, OpenClaw was burning Steinberger's personal savings. He had VCs interested and could have raised capital, but explicitly said he didn't want to build another company. OpenAI offered him the ability to keep building the tools (his actual interest) without the organizational overhead. This pattern will recur: the most talented open-source builders will be systematically acquired by labs that can subsidize their infrastructure costs.
"Steinberger has been very vocal that he didn't have the personal time to deal with all of the requests people were sending in... He was also burning cash, spending money on developing it; I think he had said $10,000 to $20,000 a month. OpenAI stepped up and started footing some of that bill."
Pinterest stock fell 20% in a single session after Q4 earnings, with ad revenue growth deceleration as the central investor concern
[TiTV] 'Code Red' and AI Debates at Pinterest • Watch →
The magnitude of the stock drop (20% in a single day) reflects how little margin for error Pinterest has with investors at its current valuation. Ad revenue deceleration at a platform-stage company is treated as an existential signal, not a temporary setback, because the alternatives are limited: you can't accelerate ad revenue growth without either growing the user base (slow) or improving ad targeting and measurement (hard). Pinterest has been working on both for months, declaring an internal code red before the earnings, not after; but the market's reaction shows that the improvement hasn't been fast enough or visible enough in the reported numbers.
"Shares of Pinterest are trying to rebound this week after earnings triggered a 20% single-day plunge last week. At the center of that story is revenue growth that is decelerating. Pinterest declared its own code red well before this stock drop."
OpenAI's long-term goal for OpenClaw is consumer accessibility ("Steinberger's mom should be able to install it"), signaling a mass-market personal agent roadmap
[TiTV] Why OpenAI Hired the OpenClaw Founder • Watch →
The current version of OpenClaw requires technical expertise to set up correctly: configuring permissions and implementing security precautions so the agent doesn't accidentally expose sensitive data. Steinberger has explicitly stated his goal is to make OpenClaw accessible to non-technical users. Under OpenAI's umbrella, the path is to start with more guardrails (resembling Claude Code's enterprise approach), then progressively unlock capabilities as security challenges are solved. The endgame is a personal AI agent that anyone, not just developers, can run persistently in the background, managing calendar, email, Slack, web browsing, and more. This is the consumer agent wave that has not yet arrived.
"Steinberger has said his goal is to get OpenClaw to the point where even his mom could download it and install... What we've talked about so far is that OpenClaw is difficult to set up. It takes a technical user to do that. But that's the direction we're going to be seeing from OpenAI β it just may take some baby steps to get there."
Pinterest's most likely exit is acquisition, potentially by a company like OpenAI that needs a visual search and shopping data moat
[TiTV] 'Code Red' and AI Debates at Pinterest • Watch →
The reporter who originally predicted "OpenAI should buy Pinterest" stands by the underlying logic: Pinterest has 80 billion searches per month, a Gen Z and millennial women user base that is uniquely valuable for advertisers, and billions of tagged images and product posts. An AI company building visual search or shopping agents would immediately benefit from this data at a scale that would take years to replicate organically. If Pinterest cannot fix its ad business independently, a sale becomes the most rational outcome. The question is whether the turnaround buys enough time to avoid a distressed sale, which would significantly reduce the multiple.
"I think there's no doubt there is a ton of underlying value in Pinterest's platform... I think if they are able to fix the ad business at least in the short term, that will keep Wall Street happy. But if the ad business isn't able to recover, then the expectation is they would think more seriously about a potential sale."