With “Nexus,” Yuval Noah Harari gives AI the big-history treatment

Note: Text has been edited and does not match audio exactly.

Kat Johnson: Hi listeners, I'm Audible Editor Kat Johnson, and today I'm honored to be speaking with the historian, professor, philosopher, bestselling author, and one of the greatest minds of our time, Yuval Noah Harari. He's here to talk about his new book, Nexus. Welcome, Yuval.

Yuval Noah Harari: Hello, it's good to be here.

KJ: Thank you so much for being here. It's such a thrill for me. Your first book, Sapiens, was a global publishing phenomenon and one of the most influential nonfiction books of all time. Its mega-macro perspective on human history made a huge impact on me, on folks like Barack Obama, and on so many of our listeners and people all over the world. So, on behalf of all of us at Audible, thank you so much for being here. Thinking back to Sapiens, did you have any idea at the time that it was going to become such a phenomenon?

YNH: No, not at all.

KJ: Not at all?

YNH: It started as a kind of book for college students, an introduction to the history of the world that condensed the whole of history into 400 or 500 pages. And later on, it was quite difficult to find a publisher. My husband and I spent like three years just trying to find one. It was a big surprise that it became such an international bestseller.

KJ: One of the things that I think sticks out about it and is so dear to my heart is your focus on storytelling. You posit that stories and shared beliefs fueled the large-scale cooperation that enabled humanity's rise. And stories are the basis of everything that we do at Audible. Where did this insight about storytelling come from? How did you land on the idea of stories as what makes us so unique?

YNH: I had the impression that everybody knows it. This is also why I was surprised that it got so much traction and attention. Basically, all great human endeavors are based on large-scale cooperation. And all cooperation is based on stories. It's not just the obvious stuff like religion, it's also things like money and corporations. If you think about what money is, money is a story. Dollars have no objective value. A Bitcoin has no objective value. Its value comes from the stories that people invent and spread and believe. When you start a company, to convince investors to give you money, you need to tell them a story. And if you launch a war and you want soldiers and civilians to join the war effort, you tell them a story. The reason why we control the planet, and not the chimpanzees or the elephants, is because we are much, much better storytellers.

Chimpanzees sometimes fight, but only in very small numbers. You could never convince 1,000 chimpanzees or a million chimpanzees to cooperate together on a big project, because in their case cooperation is based on personal familiarity, on intimate connections. A million humans cannot know each other personally, so the only way you can convince them to join a war effort or a religious festival or anything else is by telling them a convincing story.

KJ: That's so interesting. And then your new book, Nexus, is looking at information, which in the book you define very specifically. We have more information than ever before. With the advent of AI, we're going to have exponentially more. And yet, as you note, all this information is not making us any wiser or any more cooperative.

YNH: No.

KJ: What do you think people need to understand about information, and how did you know this was going to be the topic of this book?

YNH: The basic question that drives the book is, if humans are so wise, why are we so stupid? Why are we making so many self-destructive decisions? Like with nuclear weapons or with AI: we know these things can escape our control, we know they could destroy us, and nevertheless we go ahead and produce them. A lot of traditions point a finger at human nature, saying there is something deeply flawed in it that makes us self-destructive. I don't think this is the answer, and it is not the answer that Nexus gives. I think the problem is not in human nature; the problem is in our information. Most humans are good. But when you give good people bad information, they make bad decisions. And the thing is that, as history unfolds, you don't see any improvement in the quality of the information we have. Modern societies are as susceptible as Stone Age bands and tribes to delusions, mass hysteria, and mass psychosis.

If you think about the big nightmares of the 20th century, like Nazism and Stalinism, you don't see any clear progress over time in the quality of our information, of our stories, of our mythologies and ideologies. And there is a naïve view of information, prevalent in places like Silicon Valley, which is extremely dangerous. It says that basically we just need more information and everything will be okay. This is the basic ideology of Silicon Valley, and it wrongly assumes that information is truth, or that information is the raw material of truth, and that the more information you have, the more knowledge you have. This is absolutely wrong. Most information in the world is junk. And the main function of information is not to tell the truth. The main function of information is to connect. Information is connection, not truth.

"Most humans are good. But when you give good people bad information, they make bad decisions."

To give one simple example, if you think about music as information, music has no truth value whatsoever, which is why you don't have fake music. You have a lot of bad music, but not fake music. There is no such thing. Now, music connects people. People in a club dance together to the same tune. People in church clap together or chant together to the same music. It connects them. Now, unfortunately, the easiest way to connect people is not with the truth, but with fictions and fantasies. In a completely free battle of information, the truth will lose. Why? Because the truth is costly. It takes a lot of time and effort and money to research and create a truthful account, whereas fiction and fantasy are very cheap. You don't need to research anything; you just say the first thing that comes to your mind. The truth tends to be complicated because reality is complicated, and people don't like complicated stories. They prefer simple stories most of the time. And fiction can be as simple as you would like it to be.

Finally, the truth is often painful. It's unattractive. Whether on a personal level, the truth about myself, or on a national level, the truth about our nation. A politician who told the truth, the whole truth, and nothing but the truth about our nation would be unlikely to win the next election, because it would be quite painful to confront the realities. And it's also true at the individual level. This is why we have therapists. If the truth about myself were so pleasing, I wouldn't need to go to therapy to uncover it and deal with it. So, in a completely free fight, the truth tends to lose.

One last thing I'll say about it is that there is this naïve notion that, over time, information must improve, must become more truthful, because the truth confers power. Like, if you know the facts of nuclear physics, you can produce an atom bomb, and if you believe all kinds of fictions and fantasies about physics, you will not produce an atom bomb. So, truth confers power, and therefore, in the long term, organizations, institutions, and countries that adhere to the truth will win. Now, again, this is very naïve, because in order to produce an atom bomb, it is not enough to know the facts of nuclear physics. The other thing you need is millions of people cooperating on the project. You need not just nuclear physicists; you also need miners who will mine uranium, builders who will build the nuclear reactor, and farmers who will grow the food to feed all these engineers and miners and physicists. So to produce an atom bomb, you need millions of people to cooperate.

It's always a mythological or ideological story that inspires millions of people to cooperate, and there is a big difference between ideology and physics. If you try to produce an atom bomb and you ignore the facts of nuclear physics, the bomb will not explode. But if you try to create an explosive ideology that will motivate millions of people and you ignore the facts, your ideology has a very big chance of exploding with a big bang.

KJ: You mentioned the atom bomb, which is obviously another world-altering technology. This moment we're in with AI feels, I think to most people, so huge, so pivotal, so potentially game-changing. As a historian, do you think that feeling is correct? Is this completely unprecedented? Is it going to be as big as we think? Bigger? What do you think about that?

YNH: It is unprecedented. In many ways it's bigger than the atom bomb. The atom bomb was a bomb. Very, very powerful, more powerful than previous bombs, but it was still a bomb. And it empowered humans because no matter how powerful the atom bomb was, it could not make any decisions by itself. The decision to bomb Hiroshima and Nagasaki was taken by Truman and his advisers, not by the bomb. The bomb could not decide anything. AI is different. AI can make decisions by itself, and it can even create new ideas by itself. Right now, as we speak, there is a race to produce autonomous weapon systems, and an autonomous weapon system powered by AI can decide by itself who to bomb. And it can even invent new kinds of bombs and new kinds of strategies.

The most important thing to realize about AI is that AI is not a tool in our hands. AI is an independent agent which can escape our control, and we can become a tool in its hands. Even today's AIs are very, very primitive; we still haven't seen anything in terms of AI evolution. We are at the very beginning of a very long process of AI evolution, and already today there are millions of AIs out there making decisions about us. You apply to a bank for a loan, and it's an AI deciding whether to give you the loan. You are on trial, and it's an AI deciding, or influencing, what your sentence will be, whether to send you to prison for six months or six years. It's increasingly AIs making these crucial decisions about human lives.

Also, it's increasingly AIs deciding what information we encounter online and in the world: what will be the top story in the news, and which books will reach the attention of human beings. I'm here with you now on a book tour. I published Nexus, I'm trying to sell books, and my number one customer is not a human being, it's an algorithm. If I can get the help, or grab the attention, of the algorithm, that's it. The humans will follow. The algorithms know how to catch, and some people would say manipulate, the attention of human beings. So, we are in a new world where there are more and more nonhuman agents making crucial decisions about our lives. And, again, we are just at the very, very beginning of this process.

KJ: You mention that humans are incredibly fallible. We believe mythologies and stories that are persuasive, and we don't have the ability to take in as many data points as algorithms can. So this artificial intelligence, this alien intelligence, can certainly provide us with lots of help with the massive challenges we face. Do you feel like this is a Faustian bargain? Should we take it?

YNH: I mean, it's not really a question, because it's happening. AI has enormous positive potential. That's obvious in so many fields. It can provide us with the best health care in history. With AI, within 10 or 20 years, the poorest people on the planet could get better health care than the richest people got in the past. If you think about climate change, AI could be the key to preventing catastrophic climate change, because we are going to need new eco-friendly technologies to deal with this crisis, and AI can be very helpful in creating those technologies.

So, there is enormous positive potential. The question is not how we stop all development of AI. That's not going to happen, and it's not something we want to happen. It's good to develop AI. The question is, how do we do it safely? It should be like with other industries. It's obvious to us that if we develop a new car model, we have to invest, I don't know, 20 or 30 percent of the budget in making sure that the car is safe: that it has seatbelts and airbags, and that if there is an accident, the car doesn't blow up and kill everybody in sight. When we develop a new medicine, it's obvious that we cannot start selling it unless it has gone through a very rigorous process of making sure that it's safe.

"The truth tends to be complicated because reality is complicated. And people don't like complicated stories."

It should be the same with algorithms, with AIs. They have far more destructive potential than any medicine or any car. And at present, corporations can basically develop any AI they want and just release it into the world without going through any safety checks. This is very, very dangerous. They tell us, "Ah, if there is a problem, we can deal with it down the road." No, you can't. When they taught me how to drive a car, the first thing they taught me was how to press the brakes. Only after I knew how to press the brakes did the teacher show me how to press the accelerator. And I have never skied in my life, but friends who ski tell me it's the same thing: the first thing they teach you is how to stop. Only afterwards do they teach you how to go forward. Because if they just taught you how to go forward, and then, while you're skiing down the slope, they tried to shout to you, "And this is how you stop," it's not going to work. So it should be the same with AI. First we need to make sure this thing is safe, that we know how to control it, that we know how to stop it. Only afterwards can we press the accelerator and go ahead.

KJ: Very often you hear this kind of sci-fi idea that machines will become conscious and take over. At the end of Nexus, you present a very different scenario that I think is just as chilling: that machine intelligence could take over without any consciousness at all, and could potentially obliterate consciousness in general. I find that terrifying and hard to grasp. I was wondering if you could unpack that a little bit.

YNH: Well, there is a lot of confusion between consciousness and intelligence, much of it coming from science fiction, and we should be very clear about the difference. Intelligence is the ability to achieve goals and to solve problems on the way to achieving your goal. Your goal can be winning a chess game, or driving from point A to point B. You're intelligent if you're able to overcome the various problems and difficulties on the way to achieving your goal. This is intelligence. Consciousness is the ability to feel things, to feel pain and pleasure and love and hate. This is consciousness. Now, humans and other animals—dogs, pigs, whatever—we solve problems based on our feelings. Our feelings are the way we solve most problems in life. This is why we tend to confuse the two. And in so many science fiction movies, you see this, that as the computer or the robot becomes more and more intelligent, it suddenly gains consciousness and it falls in love with the human, or the human falls in love with the robot, or whatever.

So far, nothing of this is happening. We now have AIs that are far more intelligent than us in narrow fields, like playing chess or playing Go. And they don't have any feelings at all. When they win a game of chess, they don't feel joyful. When they lose, they don't cry. They are not sad. In this sense, they are an alien type of intelligence. AI is usually an acronym for artificial intelligence, but I think it's better to think of it as an acronym for alien intelligence. Alien not in the sense that it's coming from outer space. It's not. Alien in the sense that it's a very, very different type of intelligence from ours. It solves problems and pursues goals in radically different ways than human beings do.

To give one famous example, there is the game of Go, a strategy board game, a bit like chess but far more complex. It was developed in ancient China more than 2,000 years ago, and it was considered, and still is considered, a cultural treasure in East Asia; playing Go is one of the four basic arts that any civilized person needed to master. For more than 2,000 years, tens of millions of people in China, in Korea, in Japan played Go, and entire philosophies developed around how to play it. It was seen as a kind of mirror of life. And then in 2016, scientists and engineers developed this AI program, AlphaGo, that taught itself how to play Go. It did not rely on any help from humans. Humans just explained the rules of the game; everything else the computer taught itself.

Within a very short time, it became so good that it defeated the human world champion. But the amazing thing was how it defeated him. It deployed a strategy that was totally alien. When the human commentators first saw it play, they said, "This must be a mistake. This is nonsense. Nobody plays Go like that." But it proved to be far better than anything humans had thought of in more than 2,000 years. Tens of millions of people had invested so much effort, and it turned out that in the landscape of Go, for 2,000 years, humans had been exploring just a single island. There are entire regions in the land of Go that no human ever thought of exploring, because it was just beyond the way that humans think. And then this AI came along and, within a few weeks, uncovered these hidden regions in the land of Go.

And this is alien intelligence. It thinks in ways that no human thinks. If it's just a game of Go, you say, "Okay, it doesn't matter. It's just a board game." But we are likely to see this happening in more and more places, in the military, in finance, in religion, in politics: AIs will invent new ideas, new strategies, new medicines, new weapons that no human being ever thought about. And we will have to adapt to a new kind of society in which there are two different kinds of agents. For thousands of years we have been used to living in societies in which all agents are humans. Now we have to adapt to a society in which more and more powerful agents are alien intelligences making decisions about us. And the big question is, who is going to adapt to whom? We are organic; they are inorganic. Will we have to start adapting to an inorganic world, or vice versa?

Now, what do I mean by organic and inorganic? Organic beings, like humans, like horses, like dogs, live by cycles. Day and night, winter and summer; sometimes we are active, sometimes we need rest. AIs are inorganic. They don't live by cycles. They are relentless, tireless, always on. They never need to rest. They never need a vacation. If you look around, you will see that more and more of the world is being run on an inorganic basis. The news cycle, for instance, is now being run by algorithms that never rest, which is why there is no rest in the news cycle. It's always on. If you're a journalist or a politician and you sleep, you are left behind in the game. And you have to sleep, because you're organic, so you are left behind. If you think about finance, it's an amazing fact about Wall Street that the market is open only Monday through Friday, 9:30 in the morning to 4:00 in the afternoon, and it's also on vacation during Christmas and Martin Luther King Day and a couple of other holidays. So, if a new war erupts on a Friday at five minutes past four, the market will react to it only on Monday morning, because it's on vacation during the weekend. This is human time. This is organic time.

As algorithms increasingly dominate finance, finance shifts to a relentless timetable: on 24 hours a day, 365 days a year. And it's crucial to realize that if you force an organic entity to be constantly on, it eventually collapses and dies. We cannot do that. Now, the most abused word in English, or at least in American English, is the word “excited.” People think that excited means happy. "Oh, I'm so excited to meet you." "Oh, this new book is so exciting." Excitement is good for organic beings only up to a point. Excited means that the nervous system is on. If you keep an organic entity excited all the time, it collapses and dies. And if you look at politics, for instance, it's far too exciting for us. We need boring politicians who will give us time to relax.

KJ: Speaking of rest and cycles and consciousness, I want to ask you about your meditation practice, which you're known for. You practice Vipassana meditation, which is different from other forms, like mindfulness and Transcendental Meditation.

YNH: I haven't practiced anything else, so I can't comment on the differences.

KJ: But can you tell us a little bit about your practice, and whether you think it informs your work? Your work takes in this big sweep. Do you think that comes from the time you spend meditating? How has it affected your approach?

YNH: Absolutely. I mean, we are flooded, in a way, with too much information. And we need an information diet the same way that we have a food diet. We know that too much food isn't good for us. We need time to digest. It's the same with information. Too much information isn't good for us. We need information fasts, when we just give ourselves time to digest and to detoxify. And meditation helps with that. So, I meditate for two hours every day, and every year I go on a long retreat of between 30 and 60 days, when I don't have a phone, I don't connect to the internet, I don't even read or write. I just meditate. And meditation, at least Vipassana meditation, is just observing. Just feel, observe what is really happening right now.

Our minds are like a factory of stories and fantasies and delusions that come between us and the world. When I try to understand my life, my relationships, when I try to understand the world, US politics, the global economy, whatever, there are stories in my mind that come between me and the reality. I almost never observe the actual reality. I almost always observe the stories that my own mind creates about these things. Many of these stories, of course, my mind absorbed from other places. From the media, from fairy tales, from mythologies, from political propaganda. And we kind of live cocooned inside this world of stories and illusions, which are good for some purposes. But, ultimately, they make it difficult, if not impossible, to see reality as it is.

In meditation, at least in Vipassana meditation, there is just one question: What is really happening right now? Leave aside all the stories. What is actually happening right now? And it starts with the simplest things, which turn out not to be simple at all. The first instruction I got in meditation was: just close your eyes and focus your entire attention on the breath coming in and out of your nostrils. Don't try to control it. This is not a breathing exercise. You're just trying to notice reality. What is happening in this moment? In this moment, is the breath coming in or is the breath going out? So simple. And I tried to do it, and I couldn't. It took maybe 10 seconds before some story popped up in my mind, some memory, some fantasy, some plan. I would start rolling around in this memory or thought, and it would take me two or three minutes to realize, "Hey, I've lost touch with my breath. I'm thinking about something that happened a week ago. Okay, let's go back to the breath." And again, for 10 seconds I could notice, is the breath really coming in or is it really going out? And again, my mind would be hijacked by some story.

"We know that too much food isn't good for us. We need time to digest. It's the same with information."

This was, in a way, the greatest discovery of my life: that I have no control of my mind, and that I have extremely limited ability to actually observe reality. People ask, "What's the importance of the breath?" It's not important on its own. But if I can't focus for 10 seconds on the reality of the breath coming in and out of my nose, how am I supposed to understand the reality of US politics, of the conflict between Israelis and Palestinians, of the global economy? I would be trapped in these stories and fantasies all my life. So, meditation is just this simple training, again and again and again: Leave aside all the stories. What is really happening right now?

KJ: And now you can do it for two hours.

YNH: Uh, no, no. During two hours there are so many distractions, I can't. But, again, it's a practice. For two hours every day, again and again, I try. "Okay, forget about the interview you gave in the morning. Forget about the event you're going to have in the evening. What is really happening right now?"

KJ: We could all stand to learn to do that better, for sure. I also read in a New Yorker profile of you that you swim laps while listening to audiobooks. Tell us about that. Do you still do it?

YNH: Yeah.

KJ: You wear special headphones?

YNH: Yeah, I have these special headphones that don't go in the ears; they work through the bones of your head, and you can actually hear very well underwater that way. I also read from time to time, but when I swim, when I walk, when I drive, I have time to listen to books. And it's a different experience, with advantages and disadvantages. One of the main things is that when you listen, you can't stop, or it's more difficult to stop. You basically have to allow the text, the narrative, to flow over you. So it's a different experience than reading. I wouldn't do it, for instance, with a philosophy book, where I have to stop every couple of paragraphs and think. But with narratives especially, it works very, very well for me.

KJ: Do you have any favorites you could share with our listeners? Anything you've been listening to lately that you love, or an all-time favorite?

YNH: I listened to The MANIAC.

KJ: I was going to bring that up.

YNH: Oh, it's such a good book. It's such a good book.

KJ: Benjamin Labatut, incredible.

YNH: Yeah, incredible.

KJ: He's an incredible writer. And he dramatizes the scene you were talking about, the AlphaGo match with Lee Sedol, which is such a great bookend to that book.

YNH: Absolutely.

KJ: I love the narration on that one as well.

YNH: Yeah. And I recently listened to a very good book about the Emperor Justinian, the East Roman, or Byzantine, emperor of the sixth century. It's called . I had never read a book about Justinian before. It was really, really good.

KJ: Okay, I'll have to check that one out. Your own work is fabulous in audio, too. Narrator Derek Perkins read your previous books, but for Nexus you have a new narrator, Vidish Athavale, and he's incredible. Did you have anything to do with his casting? And do you have any thoughts on his performance, if you've heard it?

YNH: I mean, they showed me the different options, but this is not my expertise. Basically, it's me giving the interviews and my name on the cover of the book, but there are like 20 people actually working on everything. My husband and I started a social impact company called Sapienship a couple of years ago, and we now have 20 people on our team in different parts of the world, from the US to China. And all these things, like deciding who will narrate the book, are above my pay grade [laughs].

KJ: It might be your husband's pay grade, perhaps.

YNH: Yeah.

KJ: I mean, you are doing all these things. I think Sapienship sounds super interesting. You're obviously out promoting the book. Do you have sort of a typical day, if there is such a thing?

YNH: On a book tour, no. At home, yeah, absolutely, I try to keep to my routines. I wake up in the morning, I meditate for one hour, then I make breakfast, which is almost always porridge of one kind or another. Then I work: I sit in front of the computer and just write for a couple of hours. There is another meditation session in the afternoon. And evenings are off, for meeting friends or watching something on TV.

KJ: That sounds great.

YNH: Yeah, a quiet life.

KJ: Has all the fame and attention that you've gotten since Sapiens blew up changed how you approach your work, or your life in general?

YNH: Oh, it changed a lot of things. When I wrote Sapiens, I was this kind of anonymous professor in Jerusalem with a very quiet and provincial life. And now I globetrot around the world and appear on television to talk about AI and the latest war or the latest ecological disaster. Maybe the biggest thing is that I have to think even more carefully about what I say and what I write, because when so many people listen to you, you have a big responsibility. Your words are like seeds that go into the minds of so many people.

"Humans want power to some degree. But humans are also genuinely and deeply interested in the truth... It's a need."

One thing that I notice, talking a little about the current political climate, is that too many politicians and public figures have this ideal of authenticity; maybe they just pretend to say the first thing that comes to their mind. And I know this is good for therapy, but politics isn't therapy. When you're speaking publicly and millions of people are listening to you, we don't need authenticity. We need responsibility. From observing my own mind for two hours every day, and on these long meditation retreats, I know that the mind is full of junk. So many of the thoughts that pop up in my mind are junk. It's a good idea to share this junk with my therapist, and maybe with a few close friends, when I let my guard down and just say the first thing that comes to my mind. But as a public figure, you should not share the junk in your mind with your audience, because it then infects them as well. We need to be very careful about what we say in these situations. So, finding myself a public intellectual and a public speaker, I try to be very, very mindful about that.

KJ: Well, I appreciate that, because I know a lot of people are going to be listening. I was saying to my producer earlier that in my six years at Audible, interviewing so many incredible writers and creators, I don't think I've had anywhere near this much interest from family, friends, colleagues, everyone, as when they heard I was going to be talking to you today. So, it's just so exciting to be able to sit down with you, and I really appreciate that you've taken this time. Knowing how mindful you are, I especially appreciate it.

I do want to touch on one more question for all the people who I know are listening. At the beginning of Nexus, you wonder why we're so good at accumulating information and power but far less successful at acquiring wisdom. Can you share any hope or insights that you think could be useful for people to hear? Especially decision makers, young people, people who want humanity to thrive and improve.

YNH: One of the big problems in the world now is that a lot of people hold a very cynical view of humanity. You see it on the left with Marxists and on the right with populists. There is one thing on which Donald Trump agrees with Karl Marx: they both view humans as obsessed with power. They think humans only want power, that all human interactions are power struggles, and that in pursuit of power people fight and hurt each other. This is a very cynical view of humanity, and it's a wrong view. Yes, humans want power to some degree. But humans are also genuinely and deeply interested in the truth. We really want to know the truth about ourselves and about the world. It's a need, because you cannot be happy without knowing the truth about your life, about yourself.

No matter how many lies and delusions and fantasies we grasp at, we still have this deep need for the truth. And this gives hope. The previous view, the obsession with power, is a cynical view of humanity. I think the correct view is a tragic view of humanity: that in pursuit of truth, out of ignorance, we hurt other people and we also hurt ourselves. It's a tragedy, but there is hope in this tragedy, because it implies that there is still this really deep need for truth, which is shared by all humans. So, no matter how much fake news and how many conspiracies and fantasies and delusions flood the world, you can always reconnect to the deep human yearning for truth.

KJ: Thank you. Definitely inspiring words, I appreciate that. Listeners, Nexus by Yuval Noah Harari is available on Audible now. Listen and be at the nexus of the conversation on AI and the future of humanity. Yuval, thank you so much for being here today.

YNH: Thank you.

KJ: It was such a pleasure. Thank you.