Podcast Digest

#416 The Relentless Missionary Creating AGI: Demis Hassabis

Founders
Demis Hassabis

I'm a weird British outlier on this little island here, and I've made my own path. I followed my passions and tried to stay true to what I believe in, and I'm going to carry on doing that. This is my mission, so I will do it 100%.

It is literally just the first level of what's coming. This is a paradoxical moment, which I guess is sort of messing with my mind. It should feel amazing realizing all these dreams that we've had for more than 15 years, but it doesn't feel like how I imagined it would feel.

The way it's going is this mad rush. I've had to make my peace with that, recognize that it's going to be messy, and I'll just have to do my best, and maybe we, being the world, will muddle through somehow. I'm optimistic still.

Demis Hassabis

I am first and foremost a scientist. My goal is to understand nature, but doing science is sort of like reading the mind of God. We humans have these faculties. The world is understandable, but why should it be that way? I think there's a reason.

Computers are just bits of sand and copper. Why should these combine to do anything? I mean, it's absurd. The electrons move around and then that creates an AI system that can defeat a Go master. Why should that be possible? This is beyond evolutionary coincidence.

We can build electron microscopes and interrogate reality down to the most minute level. We can build systems that detect black holes colliding from more than a billion years ago. I mean, what is this? What the hell is going on here?

I sit at my desk at 2 a.m. and I feel like reality is staring at me, screaming at me, literally screaming at me, trying to tell me something if I could just listen hard enough. That's how I feel every day. So you can see why I'm trying to build AI.

I felt that since I was very young, that there's a deep, deep mystery about what's going on here. You can frame it however you want. You can call this God's design, or you can say it's just nature. I'm open-minded about the description. At the moment, we don't really know what time is or gravity is or any of these things. So there's a mystery waiting to be solved, and it encompasses just about everything. I would like to understand, and then I'm perfectly fine to shuffle off my mortal coil.

Shane Legg

Demis has an extraordinary level of determination, unlike pretty much anybody. Astonishing, incredible determination. That is his most defining characteristic. Just unbelievable determination. He works, sleeps, eats, breathes the mission 24 hours a day to a degree that I haven't seen with other people.

No hobbies. Football — he's a big fan of Liverpool. But other than that, it's the mission.

Demis tells a story about his father saying whether you win or lose, the really important thing is that you try your best. Demis took that very literally, as in absolutely try the absolute, absolute, absolute best you can possibly do, pretty much to the point of breaking yourself. That's how he is 24-7.

I don't think his father meant his comment in quite the literal sense. Try your best wasn't supposed to mean try literally to the point of destroying yourself. Go absolutely, completely, 100%. But that's how Demis understood it. There is no 50% mode in Demis. There's not even a 99% mode in Demis. There is only 100%.

Demis Hassabis

The slightly warped way I took this was, how do you know if you've done your best? The only way I could know is if I basically push myself to the point just before death, because that is literally when you've done your best. If you die — and by die, I mean burnout or something — then you've slightly overdone it.

It's like running a marathon. You have to basically fall over the line. And then ideally, you should be hospitalized, but not dead. That's when you can say you've done your best. If you've got any energy left and you're still standing, maybe you could have tried harder.

David Senra (host)

Demis discovered a book called The Chess Computer Handbook, written by David Levy. Levy introduced Demis to the themes that would animate his lifelong quest to build artificial intelligence. The marriage of computing and chess united Demis' two worlds. He read the book in one sitting.

Twelve-year-old Demis set out applying Levy's principles. He built a computer program to play Othello. The program proved intelligent enough to beat Demis' little brother. Demis said, it was amazing that I made something that could beat him.

Demis' experience at Bullfrog answered his big question. His mission and purpose would be to build artificial intelligence. Molyneux and the book Gödel, Escher, Bach had planted the idea that computers would soon do whatever the brain could do. Iain Banks gave him this applied utopian vision of what AI's realization could mean — boundless human flourishing.

Demis recalled: I decided then that I was going to dedicate my career to working on AI. I had already had the kernel of the idea for what eventually became DeepMind.

David Senra (host)

Peter Thiel began to think this project was an A plus on the science and maybe an F on the business model. But he had a further thought. Demis was an extreme case of an authentic entrepreneur — not a mercenary who starts with a desire to get rich from a startup, then casts around for a plausible idea, but rather a missionary who feels compelled to work on a particular challenge, then starts a company as a way of tackling it.

The good thing about missionaries is that they never quit. Even if they have to work around the clock and pay themselves nothing, they will keep obsessing about the problem. Peter said, I always say that people aren't really entrepreneurs in the abstract, but there's maybe one great company that somebody has in them. It was Demis' destiny to build this one.

Demis Hassabis

We only wanted hardcore believers. We would go to these conferences and tell people we're starting an AGI company. 80% of the people would roll their eyes at us, literally roll their eyes at us and turn around and walk away. We figured that this was a very efficient way to discover who we should be talking to.

The culture of academia could be both boringly cautious and terrifyingly competitive. Boring because it pursued incremental advances. Terrifying because scientists cut each other's throats to be the first to publish. At DeepMind, we were promising the opposite experience — the thrilling pursuit of the big leap and the near absence of rivals. We are going to do stuff where there's no competition because no one thinks it's possible.

Demis Hassabis

I was having these inane conversations nonstop with investors. I felt my brain was atrophying. I'm talking about the biggest invention ever. And they keep coming back to where's the widget. And I'm like, I'm going to revolutionize all widgets so I can pick you a random widget if you want me to. But you obviously haven't gotten the point if you're asking me this.

Larry Page was basically telling me, maybe you could build a company like Google, but it would take the best part of your career. If my real mission was to build AGI, then why don't I use all the resources that he's accumulated? I thought that was a pretty good argument.

Would I be happier looking back on building a multi-billion dollar company or helping solve intelligence? It was an easy choice. I was fed up with scrambling around trying to justify what I knew was the biggest thing of all time. I just thought, look, I'll go to Google, I'll get a shitload of computers, and then I'll solve intelligence.

David Senra (host)

If you pattern match what humans do, it's not going to take you all the way to beating the top human. The system needs to discover new moves which aren't human-like. They needed to build a machine that would search the infinity of permutations in Go and come up with entirely novel strategies.

The early version of the system played as a human would. It rediscovered certain strategies that humans had learned over millennia. Then it discovered that certain time-honored human strategies can actually be counteracted. So it discarded them. As the system became stronger, it played like nothing anyone had ever seen. It came up with a style that was completely alien.

Learning only from self-play, the system outclassed its predecessor by a mile. By unshackling itself from human wisdom, the model discovered strategies unknown to mortal players, arriving at a new understanding of the game's mysteries and revealing how little humans had truly grasped them. AI stood in judgment over centuries of human wisdom, vindicating some verdicts and tossing out others.
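
The self-play idea described here can be illustrated on a toy game. The sketch below is a hypothetical, much-simplified stand-in for the real training process: a single value table plays Nim against itself and reinforces winning moves. Given enough games it rediscovers the classic winning strategy (always leave the opponent a pile that is a multiple of four) with no human examples at all.

```python
import random

# Toy self-play learner for Nim: a pile of stones, players alternate
# taking 1-3, and whoever takes the last stone wins. Both "players" are
# the same value table, improved only from games against itself.

def train(episodes=20000, pile=10, seed=0):
    rng = random.Random(seed)
    Q = {}  # Q[(pile, take)] = learned value of that move for the mover
    for _ in range(episodes):
        p, history = pile, []
        while p > 0:
            moves = [m for m in (1, 2, 3) if m <= p]
            if rng.random() < 0.2:                    # explore a little
                m = rng.choice(moves)
            else:                                     # exploit what's known
                m = max(moves, key=lambda a: Q.get((p, a), 0.0))
            history.append((p, m))
            p -= m
        # The player who took the last stone won. Walking backwards, the
        # reward alternates sign because the two players alternate moves.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + 0.1 * (reward - old)
            reward = -reward
    return Q

Q = train()
# From a pile of 5, the self-taught policy should take 1, leaving 4.
print(max((1, 2, 3), key=lambda a: Q.get((5, a), 0.0)))
```

No human strategy is encoded anywhere; the "leave a multiple of four" rule emerges purely from the win/loss signal, which is the essence of the self-play result described above.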

Demis Hassabis

The way AI has developed is a bit like the Industrial Revolution. It developed in a certain way, but that was kind of lucky. Suppose at the start of the Industrial Revolution we had found out about energy and engines, but then imagine that there was no coal or oil in the ground. After all, there didn't have to be.

Dead dinosaurs and ancient trees just waiting there for 60 million years, ready to be dug out? It's kind of unreasonable if you think about it. Why wouldn't they just decay in the ground and become useless? Quite convenient that they didn't. And maybe that speaks to another conversation we could have about what's really going on here.

The analogy is the Internet has been for AI what coal and oil were for the Industrial Revolution. You could just literally drill a hole in the ground and get black gold. Today, we can just download all of the Internet. Neither of these resources had to be there. The dead dinosaurs are the Internet. Humanity built the Internet for a different purpose. And kind of amazingly, we woke up one day and realized that we've got the equivalent of oil.

David Senra (host)

Demis also found himself in a very volatile environment. When Demis had a bad game, his father would erupt. Demis recalled: There was one time I lost horribly. My dad went mental. He was screaming, how could you have done this? How could you have done this? It was just awful. We were in some hostel and he was going on about this screaming. And this used to be a regular occurrence with my dad.

And I finally said to him, this is ridiculous. I obviously tried my best. I'm not intentionally losing. And then that was that. I wasn't going to take it anymore. That was the last time I remember him screaming at me.

Chess consumed every weekend and every day of school vacation, squeezing out the easy recreation of a normal childhood. Demis could barely imagine what just living might mean. He had never tried it out.

Demis Hassabis

Who would have thought that you can actually inspire people too much? Well, you can, because you can get to the point where you're deluding your team, and then they are deluding you also.

It's like I'm making this judgment that this is possible because the engineers are telling me it's possible, but they're only telling me it's possible because I've over-inspired them. So in fact, none of us were getting real feedback.

His co-founder described how to debate Demis: You had to push the conversation to the point where he got more and more intense and defended his positions more and more strongly. The stronger he got, the closer you were. Then eventually he might go quiet. That's when he absorbed the message.

Demis Hassabis

This is wartime. OpenAI and Microsoft have literally parked the tanks on the lawn.

I think there's a question for anyone trying to build AGI. What are your reasons for building it? My reasons are scientific. Some are definitely building it for other purposes.

I'm doing this for knowledge and science. This is my whole life's work. I have to do what's necessary. The mission is in me. It is infused in me. You can't separate it from me. I'm definitely not denying I can be strong-willed or difficult. I think I have to be. If I was like a reed in the wind, I wouldn't be doing my job as a leader.

David Senra (host)

Demis set about preparing his troops to think differently. He declared that DeepMind's broad portfolio of blue sky research bets would have to be pared back. The company would stop publishing mission-critical research that competitors could copy. It would focus on engineering and not just science. Researchers would have to make the mental shift from peacetime to wartime.

DeepMind embraced a strict unity — all team members poured their energies into improving one single model. Next, they embraced meritocracy. Any team member was welcome to propose an improvement and test it. If the upgrade boosted performance, it was added to the master code. Seniority, force of personality, dazzling theoretical claims as to why something should work — none of that affected what went into the program. Only measurement mattered.
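
The "only measurement mattered" process can be sketched as a simple benchmark gate. Everything below is illustrative, not DeepMind's actual infrastructure: a proposed change is merged only if it beats the current model on a shared eval suite, regardless of who proposed it or how elegant the theory behind it is.

```python
# Minimal sketch of measurement-gated merging: a candidate change ships
# only if it scores higher than the current master on a fixed eval suite.
# The "models", the eval task, and all names here are invented.

def evaluate(model, eval_suite):
    """Score a model as the fraction of eval cases it gets right."""
    correct = sum(1 for case, expected in eval_suite if model(case) == expected)
    return correct / len(eval_suite)

def propose(master, candidate, eval_suite):
    """Anyone can propose; only the benchmark decides what merges."""
    if evaluate(candidate, eval_suite) > evaluate(master, eval_suite):
        return candidate, "merged"
    return master, "rejected"

# Toy example: "models" are functions, the eval task is doubling numbers.
suite = [(n, 2 * n) for n in range(10)]
master = lambda n: n + n
candidate = lambda n: n * n   # dazzling theory, worse measurement
master, verdict = propose(master, candidate, suite)
print(verdict)  # -> rejected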

The word Demis used the most was relentless. Relentless progress, relentless shipping, a relentless production machine for innovation. Less than two years after the messy shotgun marriage that created Google DeepMind, his team had closed the technical gap. It was a considerable achievement.


This Startup Secretly Detects Fraud For Fortune 500s

Y Combinator
Karine Malata

Variance is building purpose-built AI agents for risk and compliance. We automate content review, fraud reviews, identity reviews at scale. We're powering some of the largest companies in the world — Fortune 500s, marketplaces. We've been working with GoFundMe to review all of their fundraisers at scale, and some Fortune 50 companies for verifying all of their sellers and complex UBO (ultimate beneficial owner) verifications.

Variance usually deals with really sensitive data and sensitive issues. The phrase I like to use is that we're building the systems that are often used by the bad guys, but we're building them for the good guys.

Oftentimes it's really hard to market the use cases that customers are using Variance for because those issues are so sensitive. If we were to market those, then it may create more fraud, more abuse, more bad. We're in the shadows. Even far beyond the Series A, we'll always be a company that's a little bit more in the shadows. And I think that's OK for us.

Karine Malata

GoFundMe is actually very crisis-driven. If there is any sort of large event or natural disaster, there's usually going to be a spike in fundraisers. It's really hard for the team to keep up with how many of these fundraisers are actually real or possibly fraudulent.

One of the examples we saw — recently there was the murder of Charlie Kirk. There was a spike of fundraisers for the family of Charlie Kirk. How do you know who's actually related to Charlie Kirk and who's just trying to raise money for their own gain?

There's a lot of behavioral signals you can use. You have information on the identity, what that account has done in the past, and information at the fundraiser level — the image, the bio. The Variance AI agents use all of that context and the GoFundMe terms of service to decide whether or not that should be allowed on the platform. That work used to be done by human analysts and now can be fully automated in a much more consistent manner.
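
A rough sense of how such signals combine can be given with a toy triage function. Every field name and threshold below is invented for illustration; the system described here reasons over the platform's actual terms of service with an AI agent rather than a hand-tuned score like this.

```python
# Toy sketch of combining fundraiser signals into a review decision.
# All field names and thresholds are hypothetical.

def triage(signals):
    score = 0
    if signals.get("account_age_days", 0) < 7:
        score += 2    # brand-new account
    if signals.get("prior_violations", 0) > 0:
        score += 3    # history of abuse on the platform
    if not signals.get("identity_verified", False):
        score += 2    # identity never verified
    if (signals.get("claims_relation_to_public_figure", False)
            and not signals.get("relation_documented", False)):
        score += 3    # unverifiable claim of a relationship
    if score >= 5:
        return "block"
    return "manual_review" if score >= 3 else "allow"

print(triage({"account_age_days": 2,
              "identity_verified": False,
              "claims_relation_to_public_figure": True}))  # -> block
```

The point of the agent approach in the text is precisely that this brittle scoring step is replaced by a model reading the policy and the evidence together.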

Karine Malata

If you sign up to do any sort of business online, whether it's for a marketplace or a financial institution, they have the compliance requirements to verify that you are actually linked to the business you say you own.

For example, I sign up to say I'm going to be doing business with Variance. Well, the legal name of my company is Decoy Technologies, and it is tied to Karine Malata. That's a simple example, but building that graph is really hard. Oftentimes you're going to see companies with multiple shell companies, tied to multiple different agents and different identities. Within that really large graph, you expand the area of risk — one of these nodes could be in a sanctioned country, one could have adverse media on them, or may have been to court for money laundering. Companies are required to conduct these investigations at scale, and at the moment it's entirely manual.

Karine Malata

There are really only three building blocks you need for AI agents. You have the compliance documents — the standard operating procedures, what the company deems necessary to verify at onboarding or any other part of the lifecycle. Once we have those compliance documents, the AI agent can do its work using tools that we built and data, internal or external. Those are the only building blocks you need to automate complex KYC, complex KYB, complex content review.
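
Those three building blocks (policy, tools, data) can be sketched as follows. The "agent" here is a hard-coded stub standing in for where a real system would put an LLM, and every tool name, record, and policy string is invented:

```python
# Sketch of the three building blocks: a compliance policy, tools that
# reach data sources, and an agent that applies the policy via the tools.

POLICY = "Verify the legal name exists in the registry and the declared owner matches."

def registry_lookup(name):
    """Tool: external business-registry lookup (stubbed with fake data)."""
    fake_registry = {"Decoy Technologies": {"owner": "Karine Malata"}}
    return fake_registry.get(name)

def internal_record(account_id):
    """Tool: internal onboarding database (stubbed with fake data)."""
    fake_db = {"acct-1": {"business": "Decoy Technologies",
                          "declared_owner": "Karine Malata"}}
    return fake_db.get(account_id)

def kyb_agent(account_id):
    """Apply POLICY to one account using the available tools."""
    record = internal_record(account_id)
    if record is None:
        return "fail: unknown account"
    entry = registry_lookup(record["business"])
    if entry is None:
        return "fail: business not in registry"
    if entry["owner"] != record["declared_owner"]:
        return "fail: declared owner does not match registry"
    return "pass"

print(kyb_agent("acct-1"))  # -> pass
```

In the real system the fixed if/else chain would be an LLM reading the compliance documents and deciding which tool to call next.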

We have access to over 100 business registries across the world, which makes us international, and our AI agents also have access to the open web. A lot of unstructured data is found directly on the web. Access to the web was actually one of the final pieces that made this whole problem really hard to automate — because a big part of what a human analyst would do is Google names, look at what comes up, and apply judgment. Without an agent that can search the web, you can't even trace back the whole graph of abuse.

Karine Malata

The data problem was really the core, hardest technical challenge. If you need to verify a fundraiser, the data is going to be scattered: user identity data — login behaviors, the devices they've used, the PII they onboarded with at the beginning. You also need information about the business, and all the information on the fundraiser itself and its history.

Whether you're a financial institution or a marketplace, that data is going to be scattered across five to ten different systems, in different data stores. And sometimes that data is hidden behind a UI. The only way our AI agents can scoop up that data and reason over it is to directly scrape from a UI that was built for a human.

Very recently, we've added this third integration method — spinning up a browser, opening up a really old review tool that was built for a human, pulling that data, and then reasoning over it.
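
Since the tooling isn't named, here is a standard-library sketch of just the extraction half of that step: pulling labeled fields out of a human-facing review page's HTML. The markup is invented; a real deployment would first drive a headless browser to log in and render the page before anything can be parsed.

```python
from html.parser import HTMLParser

# Hypothetical snippet of a legacy review tool's page, built for humans.
PAGE = """
<div class="field"><span class="label">Seller</span>
  <span class="value">Decoy Technologies</span></div>
<div class="field"><span class="label">Status</span>
  <span class="value">Pending review</span></div>
"""

class FieldScraper(HTMLParser):
    """Collect label/value span pairs into a plain dict."""

    def __init__(self):
        super().__init__()
        self.fields, self._role, self._label = {}, None, None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if tag == "span" and cls in ("label", "value"):
            self._role = cls

    def handle_data(self, data):
        text = data.strip()
        if not text or self._role is None:
            return
        if self._role == "label":
            self._label = text
        elif self._label is not None:
            self.fields[self._label] = text
            self._label = None
        self._role = None

scraper = FieldScraper()
scraper.feed(PAGE)
print(scraper.fields)
```

Once the page is reduced to structured fields like this, the agent can reason over it the same way it reasons over data pulled from an API or a database.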

Karine Malata

Companies usually have a patchwork of deterministic systems. They have rules that say if this transaction is over a thousand dollars, then do this. They have classifiers that are good at detecting one specific flavor of abuse. Then they have humans, who are good at understanding all these different signals in context and making a final decision.
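
The deterministic layer of that patchwork is easy to picture: an ordered list of rules, each a predicate plus an action. The thresholds and field names below are illustrative only; production systems carry hundreds of such rules.

```python
# Sketch of a deterministic rules engine: each rule is a name, a
# predicate over the event, and the action to take when it fires.
# All thresholds and field names are hypothetical.

RULES = [
    ("large_transaction",
     lambda e: e.get("amount_usd", 0) > 1000,
     "escalate_to_review"),
    ("mismatched_country",
     lambda e: e.get("card_country") not in (None, e.get("ip_country")),
     "step_up_verification"),
]

def apply_rules(event):
    """Return the actions of every rule the event triggers, in order."""
    return [action for name, predicate, action in RULES if predicate(event)]

print(apply_rules({"amount_usd": 2500,
                   "card_country": "FR", "ip_country": "US"}))
```

Rules like these are fast and auditable, which is exactly why, as described below, they survive alongside classifiers and humans; their weakness is that every new fraud pattern needs a new hand-written rule.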

The most important feature of a fraud system is that it needs to evolve really rapidly with a really tight feedback loop. But with rules engines, classifiers, and humans — humans being really slow and a little bit inconsistent — that feedback loop can only be so fast. You could never achieve a self-healing system that could thrive in a dynamic environment. And fraud is the most dynamic environment because you always have adversaries.

Now AI agents close the loop. They can materialize any features a rules engine would. You don't need a classifier anymore because AI agents can read standard operating procedures and reason over images and unstructured data. You don't need human reasoning anymore. You have this fully self-healing system, which at scale allows companies to ship faster and open new product lines without this bottleneck.

Karine Malata

During the elections, we had one customer — a Fortune 500 hosting large communities, fairly politically exposed. Because our AI agents had access to the context of entities in relation to other entities — how does this user fit into all the other users we're looking at — we were able to detect really complex fraud rings of state-sponsored actors pushing one narrative.

This wouldn't have been possible with one classifier in isolation looking at one piece of content after the other. Because AI agents can directly query our data stores, materialize features on the fly, and use one step to reason over what the next step and next tool call should be, we detected much more sophisticated fraud rings than you would have been able to before.

Some of the abuse vectors we've detected have had really serious physical implications — people making threats online of physical harm who have a plan to do these things at scale. Once it's detected and investigated by Variance, it's usually going to be in the hands of law enforcement.

Karine Malata

We're 12 people, with five software engineers building all of this. We've remained very lean.

Every engineer is going to have three monitors with their coding agents running. We still have good oversight, we still review all the PRs. But in terms of output, everyone is a manager of a small team of AI agents. I would say we're five, but in terms of software output, we're probably closer to a 25-person team.

One really interesting anecdote — our customer success manager, who's entirely non-technical but interfaces with enterprise customers daily, now takes on feature requests, especially the simple ones, gives them directly to a Cursor agent, and ships features in a fully autonomous manner. She gets back to the customer a few hours later and says, "Oh, it's shipped." She didn't even need to speak to the engineering team.

Karine Malata

Your first customer really believes in the founders first — their ability to solve their problem. Because when you start enterprise, you do have a version of your product, but it's going to evolve so much based on your first customer's requirements.

Our first customer was IAC, the publicly traded company. We were working with Ask Media Group, which had a very large amount of marketing content. Because IAC is a large publicly traded company, there were a lot of compliance requirements around what could go into their marketing content.

That problem was entirely solved using human agents because compliance guidelines are really hard to map to a traditional classifier. You can't give advice for legal defense, for instance — it's really hard to map that to a regular expression. So it was a very large team of human agents, part of a BPO, doing this work. And that was basically hurting their growth. It took eight months to land IAC. We really did it the hard way because we went enterprise from the very beginning.

Karine Malata

GPT-4 came out during the YC batch, which was really interesting. As we were running this pilot for our first customer, OpenAI was coming out with new models in the middle of the pilot, which was changing our cost structure by a 10x factor and also changing our performance quite a lot. It was really interesting to build in this world that was super dynamic.

Karine Malata

Around July 2024, the company was growing rapidly. Our revenue was doubling within the month and then doubling the month after. We had just wrapped up TrustCon, one of the largest trust and safety conferences in San Francisco. The day after, we were super tired. I was going back to the office on a Sunday afternoon, riding in the bike lane, and a truck hit me.

I broke my spine, broke my leg, and was hospitalized for about ten days. I couldn't walk for about ten days. For a year and a half as a founder, you're moving so fast, working every day. And then all of a sudden you're in a bed and you can't move.

Three or four days after, Michael came to visit me in the hospital. He brought this Norman Foster book — the architect of Apple Park — as a gift. He was sitting next to my hospital bed holding the book, and we were both in silence because we didn't even know what to say. It was silence for a couple of minutes. And he laughed and said, "Well, this is going to make a really good scene in our IPO movie."

Karine Malata

Michael would tell me and repeat to me the story of Steve Wozniak, who went through the plane crash and then left Apple and went back to Berkeley. So there was a feeling that maybe this was going to be the end of the company and maybe we just needed to part ways.

But there was a really deep feeling that it was not the end. It's an interesting challenge — a hurdle you need to go over — but it just doesn't feel like the end at all. It feels like there's so much more to come. I definitely felt that. I know Michael felt that. And now I can walk again. And we learned that we definitely need to scale me so that hopefully this doesn't happen again.

Karine Malata

Michael and I had a very specific pair of skill sets in fraud. We understood what the industry looked like. We had a lot of issues with how these problems were solved at scale. From the beginning, we always felt a really strong sense of duty to put our very specific and quite rare pair of skill sets to the good of the industry.

It was never really about starting a company for any problem set. We wanted to solve that problem. We knew the technology was going to evolve. We were so lucky — LLMs got so good, now we have agentic systems, agent harnesses that can fully solve this problem end to end.

This strong sense of duty is what kept Michael and me going throughout the years. And I think it resonates deeply with customers. When they meet us, they see founders who are deeply trying to solve the problem they're seeing on a day-to-day basis — because they've seen it before, and because it's something that is doable if you put enough care into building the right engineering systems.