WIRED Editor at Large Steven Levy sits down with Google DeepMind CEO Demis Hassabis for a deep-dive discussion on the emergence of AI, the path to artificial general intelligence (AGI), the future of work, and how Google is positioning itself to compete in the age of AI.
Director: Justin Wolfson
Director of Photography: Christopher Eusteche
Editor: Cory Stevens
Host: Steven Levy
Guest: Demis Hassabis
Line Producer: Jamie Rasmussen
Associate Producer: Brandon White
Production Manager: Peter Brunette
Production Coordinator: Rhyan Lark
Camera Operator: Lauren Pruitt
Gaffer: Vincent Cota
Sound Mixer: Lily van Leeuwen
Production Assistant: Ryan Coppola
Post Production Supervisor: Christian Olguin
Post Production Coordinator: Stella Shortino
Supervising Editor: Erica DeLeo
Assistant Editor: Justin Symonds
Transcript
00:00 It's a very intense time in the field. We obviously want all of the brilliant things
00:04 these AI systems can do: come up with new cures for diseases, new energy sources,
00:08 incredible things for humanity. That's the promise of AI. But also there are worries.
00:12 If the first AI systems are built with the wrong value systems, or they're built unsafely,
00:16 that could be also very bad. WIRED sat down with Demis Hassabis, the CEO of Google DeepMind,
00:22 which is the engine of the company's artificial intelligence. He's a Nobel Prize winner and also
00:27 a knight. We discussed AGI, the future of work, and how Google plans to compete in the age of AI.
00:33 This is The Big Interview.
00:41 Well, welcome to The Big Interview, Demis.
00:43 Thank you. Thanks for having me.
00:44 So let's start talking about AGI a little here. Now, you founded DeepMind with the idea
00:51 that you would solve intelligence and then use intelligence to solve everything else.
00:57 And I think it was like a 20-year mission. We're like 15 years into it. And you're on track?
01:02 I feel like, yeah, we're pretty much dead on track, actually, according to our estimate.
01:06 That means five years away from what I guess people will call AGI.
01:11 Yeah. I think in the next five to 10 years, there would be maybe a 50% chance that we'll
01:16 have what we've defined as AGI, yes. Well, some of your peers are saying two years,
01:21 three years, and others say a little more. But that's really close. That's really soon.
01:27 How do we know that we're that close? There's a bit of a debate going on at the
01:31 moment in the field about definitions of AGI. And then, of course, depending on that,
01:36 there are different predictions for when it will happen. We've been pretty consistent from the very
01:40 beginning. And actually, Shane Legg, one of my co-founders and our chief scientist, helped
01:45 define the term AGI back in, I think, the early 2001 timeframe. And we've always thought about
01:51 it as a system that has the ability to exhibit sort of all the cognitive capabilities we have as
01:58 humans. And the reason that's important, the reference to the human mind, is that the human mind is
02:03 the only existence proof we have, maybe in the universe, that general intelligence is possible.
02:08 So if you want to claim general intelligence, AGI, then you need to show that
02:12 it generalizes to all these domains. So when everything's filled in, when all the check marks
02:16 are checked, then we have it? Yes. So I think there are missing capabilities
02:22 right now, you know, that all of us who've used the latest LLMs and chatbots will know very
02:28 well, like on reasoning, on planning, on memory. I don't think today's systems can do
02:33 true invention, true creativity, hypothesize new scientific theories.
02:39 They're extremely useful, they're impressive, but they have holes. And actually, one of the main
02:44 reasons I don't think we're at AGI yet is the consistency of responses. In some
02:51 domains, we have systems that can do International Mathematical Olympiad problems to gold-medal
02:57 standard with our AlphaProof system. But on the other hand, these systems sometimes still trip up on high
03:02 school maths, or even counting the number of letters in a word. So that, to me, is not what
03:08 you would expect. That level of difference in performance across the board is not
03:14 consistent enough, and therefore shows that these systems are not fully generalizing yet.
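That gap is easy to make concrete. Counting letters in a word is a one-line deterministic program, which is what makes it so striking when a model that can write Olympiad-level proofs gets it wrong. A minimal Python sketch of the task Hassabis mentions (an illustration; the much-discussed "r's in strawberry" case is a widely reported example, not one from this interview):

```python
# Counting letters is trivial for a deterministic program, which is what
# makes it such a striking failure mode for chat models that can otherwise
# do Olympiad-level maths. (Illustrative sketch, not from the interview.)
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of `letter` in `word`."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # -> 3
```

One common explanation for the failure is tokenization: models operate on subword tokens rather than individual characters, so character-level questions fit the input representation poorly.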
03:17 But when we get it, is it then like a phase shift, where all of a sudden things
03:23 are different, all the check marks are checked, and we have a thing that can do
03:28 everything? Are we then in a whole new world? I think, you know, again, that is debated. And
03:33 it's not clear to me whether it's going to be more of a kind of incremental transition versus a step
03:40 function. My guess is it's going to be more of an incremental shift. Even if you had a
03:45 system like that, the physical world still operates, in the end, by physical laws, you know,
03:50 factories, robots, these other things. So it'll take a while for the effects of
03:56 this sort of digital intelligence, if you like, to really impact a lot of the real-world
04:01 things. Maybe another decade plus. But there are other theories on that, too, where it
04:06 could come faster. Yeah, Eric Schmidt, who I think used to work at Google, has said that it's almost
04:11 like a binary thing. He says if China, for instance, gets AGI, then we're cooked. Because if someone
04:18 gets it like 10 minutes before the next guy, then you can never catch up,
04:24 because then it'll maintain bigger and bigger leads. You don't buy that, I guess.
04:29 I think it's an unknown. It's one of the many unknowns. That's sometimes
04:33 called the hard takeoff scenario, where the idea is that these AGI systems
04:38 are able to self-improve, maybe code future versions of themselves, and maybe they're
04:42 extremely fast at doing that. So what would be a slight lead, let's say a few days,
04:48 could suddenly become a chasm, if that was true. But there are many other ways it could
04:53 go, too, where it's more incremental, where some of these self-improvement things are not able to
04:58 accelerate in that way. Then being around the same time would not make much difference.
05:05 But these geopolitical issues are important. I think the systems that are
05:10 being built will have some imprint of the values and the kind of norms of the designers and
05:16 the culture that they were embedded in. So, you know, I think these kinds of
05:21 international questions are important. So when you build AI at Google, do you have that in mind? Do you feel
05:29 a competitive imperative, in case that's true: oh my God, we better be first? It's a very intense time
05:35 at the moment in the field, as everyone knows. So many resources going into it, lots of pressures,
05:41 lots of things that need to be researched. And there are lots of different types of
05:45 pressures going on. We obviously want all of the brilliant things that these AI systems can do.
05:50 You know, I think eventually we'll be able to advance medicine and science with it,
05:54 like we've done with AlphaFold: come up with new cures for diseases, new energy sources,
05:58 incredible things for humanity. That's the promise of AI. But also there are worries.
06:05 If the first AI systems are built with the wrong value systems, or they're built
06:09 unsafely, that could be also very bad. And there are at least two risks that I worry a
06:14 lot about. One is bad actors, whether it's individuals or rogue nations, repurposing
06:20 general-purpose AI technology for harmful ends. And then the second one is obviously the technical risk of AI
06:25 itself, as it gets more and more powerful, more and more agentic. Can we make sure the
06:30 guardrails around it are safe and can't be circumvented? And that interacts with this idea
06:36 of what the first systems that are built by humanity are going to be like. There's a
06:40 commercial imperative, there's a national imperative, and there's a safety aspect to worry about:
06:47 who's in the lead, and where those projects are.
06:50 A few years ago, the companies were saying: please regulate us, we need regulation.
06:55 And now in the US, at least, the current administration seems less interested in putting
07:02 regulations on AI than in accelerating it so we can beat the Chinese. Are you still asking for
07:08 regulation? Do you think that's a miss on our part?
07:11 I think, and I've been consistent in this, there are these other geopolitical
07:16 overlays that have to be taken into account. And the world's a very
07:21 different place to how it was five years ago, in many dimensions. But I also
07:26 think the idea of smart regulation that makes sense around these increasingly powerful
07:31 systems is going to be important. I continue to believe that. I think, though,
07:35 and I've been consistent on this as well, it needs to be international, which looks hard at
07:39 the moment in the way the world is working, because these systems are going to affect
07:44 everyone, and they're digital systems. So if you restrict them in one area,
07:50 that doesn't really help in terms of the overall safety of these systems getting built
07:55 for the world and for us as a society. So the bigger need, I think, is some kind
08:01 of international cooperation or collaboration. And then smart regulation,
08:06 nimble regulation, that moves as the knowledge about the research becomes better and
08:12 better. Would it ever reach a point for you where you would feel, man, we're not putting the
08:17 guardrails in, we're competing so hard that we really have to stop, or you can't stay involved in
08:23 that?
08:23 A lot of the leaders of the main labs, at least the Western labs,
08:30 there's a small number of them, and we do all know each other and talk to each other regularly,
08:34 and a lot of the lead researchers do. The problem is that it's not clear we have the right
08:39 definitions to agree on when that point is. Today's systems, although
08:44 they're impressive, as we discussed earlier, are also very flawed, and I don't think
08:48 today's systems are posing any sort of existential risk, so it's still theoretical. The problem
08:56 is that there are a lot of unknowns: we don't know how fast those risks will come, and we don't know how risky
09:00 they will be. But in my view, when there are so many unknowns, the thing to do is to use the
09:05 scientific method as we approach this AGI point, with enough time and enough care and thoughtfulness.
09:09 If we do that, I'm optimistic we'll overcome them, at least technically. The geopolitical questions
09:14 could actually end up being trickier.
09:18 That makes perfect sense. But on the other hand, if that timeframe is right, we just don't have
09:24 much time. We don't have much time. I mean, we're increasingly putting resources
09:29 into security and things like cyber, and also research into controllability and understanding
09:38 of these systems, sometimes called mechanistic interpretability. There are a lot of
09:41 different sub-branches of AI that are being invested in, and I think even more needs to
09:47 happen. And then at the same time, we also need to have more societal debate about institution
09:54 building: how do we want governance to work, how are we going to get international agreement,
09:59 at least on some basic principles around how these systems are used and deployed, and also built.
10:05 What about the effect on work, on the marketplace? How much do you feel that AI is going
10:13 to change people's jobs, the way jobs are distributed in the workforce?
10:18 My view is that if you talk to economists, they feel like not much
10:22 has changed yet. People are finding these tools useful, certainly in certain domains.
10:26 With things like AlphaFold, many, many scientists are using it to accelerate their work.
10:30 So it seems to be additive at the moment. We'll see what happens over the next five,
10:34 10 years. I think there's going to be a lot of change in the jobs world. But as in the
10:40 past, what generally tends to happen is that new jobs are created that are actually better, that utilize
10:46 these tools or new technologies. That's what happened with the internet, and with mobile.
10:49 We'll see if it's different this time. Obviously, everyone always thinks the new one will be
10:53 different, and maybe it will be. But I think for the next few years, the most likely outcome is that
10:58 we'll have these incredible tools that supercharge our productivity, that are really useful as
11:06 creative tools, and that actually almost make us a little bit superhuman in some ways in what we're
11:11 able to produce individually. So I think there's going to be a kind of golden era in the next period
11:17 in terms of what we're able to do. Well, if AGI can do everything humans can do, then it would seem that
11:23 it could do the new jobs too. That's the next question about what AGI brings. But even
11:28 if you have those capabilities, there are a lot of things I think we won't want a machine to do.
11:34 I sometimes give this example of doctors and nurses. Take a doctor: what the doctor does
11:40 with diagnosis, one could imagine that being helped by an AI tool, or even having an AI kind
11:46 of doctor. On the other hand, with nursing, I don't think you'd want a robot to do that. I think
11:51 there's something about the human empathy aspect of it, and the care and so on, that's particularly
11:57 humanistic. I think there are lots of examples like that. But it's going to be a different world, for
12:03 sure.
12:03 If you were talking to a graduate now, what advice would you give them for staying relevant
12:10 through the course of a lifetime, you know, in the age of AGI?
12:16 My view, currently, and of course this is changing all the time as the technology develops,
12:23 but right now: if you think of the next five to 10 years, the most productive people
12:29 might be 10x more productive if they are native with these tools. So for kids today, students
12:35 today, my encouragement would be to immerse yourself in these new systems, understand them. I
12:42 think it's still important to study STEM and programming and other things, so that you understand
12:46 how they're built, and maybe you can modify them yourself on top of the models that are available.
12:50 There are lots of great open-source models and so on. And then become incredible at
12:56 things like fine-tuning, system prompting, system instructions, all of these
13:01 additional things that anyone can do, and really know how to get the most out of those tools.
13:07 Use them for your research work, programming, things that you're doing on your
13:11 course, and then come out of that being incredible at utilizing those new tools for whatever it is
13:17 you're going to do.
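To make the "system instructions" part of that advice concrete: with an open-source chat model, a system instruction is just the first message in the conversation, and anyone can experiment with it locally. A minimal sketch using the Hugging Face transformers chat pipeline (the model name below is merely one small open instruction-tuned model; substitute whatever you have access to):

```python
# Minimal sketch: steering an open-source chat model with a system
# instruction, using the Hugging Face `transformers` chat pipeline.
# The model name is only an example of a small open instruct model.
from transformers import pipeline

chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

messages = [
    # The system message sets persistent behavior for the whole conversation.
    {"role": "system", "content": "You are a concise tutor. Answer in at most two sentences."},
    {"role": "user", "content": "What does a system instruction do?"},
]

result = chat(messages, max_new_tokens=120)
# The pipeline returns the full conversation; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```

Fine-tuning is the next step up: rather than steering behavior per conversation, you update the model's weights on your own examples, which open-source libraries such as Hugging Face's PEFT and TRL are built for.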
13:18 Let's look a little beyond the five- and 10-year range. Tell me what you envision when you look at
13:25 our future in 20 years, in 30 years, if this comes about. What's the world like when AGI is
13:32 everywhere? Well, if everything goes well, then we should be in an era of what I like to call
13:38 radical abundance. So AGI solves some of these key, what I sometimes call root-node,
13:45 problems facing society. Good examples would be curing diseases, much healthier,
13:50 longer lifespans, finding new energy sources, whether that's optimal batteries,
13:57 better room-temperature superconductors, or fusion. And then if that all happens,
14:04 it should be a kind of era of maximum human flourishing, where we travel to the stars
14:09 and colonize the galaxy. I think the beginning of that will happen in the
14:17 next 20, 30 years, if the next period goes well. I'm a little skeptical of that. I think we have
14:23 unbelievable abundance now, but we don't distribute it fairly. I think that
14:29 we kind of know how to fix climate change, right? We don't need an AGI to tell us how to do it. And
14:34 we're not doing it. I agree with that. I think we've been, as a species, as a society,
14:39 not good at collaborating, and I think climate is a good example. Humans are still operating
14:45 with a zero-sum-game mentality, because actually the earth is quite
14:50 finite relative to the number of people there are now, and to our cities. This is why our
14:56 natural habitats are being destroyed, and it's affecting wildlife and the climate and
15:01 everything. And it's partly because people are not willing to accept sacrifices: we could figure
15:07 out climate, but it would require people to make sacrifices, and people don't want to. But
15:12 radical abundance would be different. We would finally be in what would feel like
15:19 a non-zero-sum game. How will we get jarred into that? Like, you talk about diseases. I'll give you an
15:23 example. We have vaccines, and now, you know, some people think we shouldn't use them. Let me give you
15:27 a very simple example: water access. This is going to be a huge issue in the next 10, 20 years. It's already
15:32 an issue in different countries, in poorer parts of the world, drier parts of the world, and it's
15:36 obviously compounded by climate change. We have a solution to water access: desalination.
15:41 It's easy. There's plenty of seawater, and almost all countries have a coastline. But the problem is
15:46 that it's salty water, and desalination costs a lot of energy, so only very rich countries
15:52 use it as a solution to their freshwater problem. But if energy was essentially
15:57 zero, if there was renewable, free, clean energy, right, like fusion, suddenly you solve the water
16:03 access problem. Who controls a river, or what you do with it, becomes
16:09 much less important than it is today. I think things like water access, if you run forward 20
16:15 years and there isn't a solution like that, could lead to all sorts of conflicts, probably. That's
16:19 the way it's trending, especially if you include further climate change. And there are many,
16:23 many examples like that. You could create rocket fuel easily, because you just separate
16:27 seawater into hydrogen and oxygen. It's just energy, again.
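The energy arithmetic behind that argument is easy to sketch. Modern seawater reverse osmosis is commonly cited at roughly 3 to 4 kWh per cubic meter of fresh water, so the cost of desalinated water is dominated by the cost of energy. A back-of-the-envelope Python sketch (both constants are rough, commonly cited ballparks assumed for illustration, not figures from the interview):

```python
# Back-of-the-envelope: why cheap energy turns desalination into a
# general solution for water access. Both constants are rough ballparks
# (assumptions for illustration, not figures from the interview).
ENERGY_KWH_PER_M3 = 3.5       # typical seawater reverse-osmosis energy use
DOMESTIC_M3_PER_PERSON = 100  # rough municipal water use per person, per year

def annual_energy_bill(people: int, price_per_kwh: float) -> float:
    """Yearly energy cost of desalinating domestic water for `people`."""
    return people * DOMESTIC_M3_PER_PERSON * ENERGY_KWH_PER_M3 * price_per_kwh

city = 1_000_000
print(f"grid power  ($0.15/kWh): ${annual_energy_bill(city, 0.15):,.0f}/yr")
print(f"cheap power ($0.01/kWh): ${annual_energy_bill(city, 0.01):,.0f}/yr")
# grid power  ($0.15/kWh): $52,500,000/yr
# cheap power ($0.01/kWh): $3,500,000/yr
```

At near-free energy prices, the marginal cost works out to a few dollars per person per year, which is the sense in which "it's just energy, again."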
16:34 So you feel that once these problems get solved by AGI, by AI, our outlook will change, and we will be... That's what I hope.
16:43 ...less avaricious. Yes, that's what I hope. But that's still a secondary part. So AGI
16:48 will give us the radical-abundance capability, technically, like the water access. I then hope,
16:53 and this is where I think we need some great philosophers or social scientists to be involved,
16:58 that this should shift our mindset as a society to non-zero-sum. You know, there's still
17:04 the issue of whether you divide even the radical abundance fairly, right? Of course, that's what
17:09 should happen. But I think that's much more likely once people start feeling and understanding
17:13 that there is this almost limitless supply of raw materials and energy and things like that.
17:20 Do you think that driving this innovation through profit-making companies
17:24 is the right way to go? Are we most likely to reach that optimistic high point through that?
17:29 I think capitalism, the current Western
17:33 sort of democratic systems, have so far been proven to be sort of the best
17:40 drivers of progress. So I think that's true. My view is that once you get to that sort of stage
17:46 of radical abundance, post-AGI, I think economics starts changing, even the notion of value and money.
17:53 And so, again, I'm not sure why economists are not working harder on this.
17:57 Maybe they don't believe it's that close, right? But if they really took it seriously,
18:02 like the AGI scientists do, then I think there's a lot of new economic theory that's
18:07 required. You know, one final thing. I actually agree with you that this is so significant and it's
18:15 going to have a huge impact. But when I write about it, I always get a lot of response from people
18:21 who are really angry already about artificial intelligence and what's happening.
18:28 Have you tasted that? Have you gotten that pushback and anger from a lot of people? It's
18:34 almost like the industrial revolution, people fighting that. Yeah. I mean, I
18:38 haven't personally seen a lot of that, but obviously I've read and heard a lot
18:42 about it, and it's very understandable. That's happened many times, as you say, with the industrial
18:46 revolution: when there's big change, a big revolution. And I think this will be at least as
18:50 big as the industrial revolution, probably a lot bigger. So it's not surprising: there are unknowns,
18:54 it's scary, things will change. But on the other hand, when I talk to people about the passion of
18:59 why I'm building AI, which is to advance science and medicine and understanding of the world around
19:03 us, and then I explain to people, and I've demonstrated, that it's not just talk: here's
19:08 AlphaFold, you know, a Nobel Prize-winning breakthrough that can help with medicine and drug discovery,
19:13 and obviously we're doing this with Isomorphic now to extend it into drug discovery, and we can cure
19:17 diseases, terrible diseases that might be afflicting your family. Suddenly,
19:20 many people are like, well, of course we need that. It would be immoral not to
19:24 have that if it's within our grasp. And the same with climate and energy, you know,
19:30 many of the big societal problems. As we've talked
19:35 about, there are many big challenges facing society today. And I often say I would be very worried
19:40 about our future if I didn't know something as revolutionary as AI was coming down the line to help
19:46 with those other challenges. Of course, it's also a challenge itself, right? But at least it's one
19:51 of these challenges that can actually help with the others, if we get it right.
19:54 Well, I hope your optimism holds out and is justified. Thank you so much.
20:00 I'll do my best. Thank you.