For two days in late April 2023, just before his 100th birthday, The Economist spent more than eight hours talking to Henry Kissinger.
The former US secretary of state and national security adviser laid out his concerns about the risks of a conflict between great powers and suggested solutions to avoid it.
Kissinger also spoke about technology in a hypermilitarized world. “We could end up destroying ourselves. And now it’s possible to get to a point where the machines can refuse to be turned off,” he says.
When it comes to destroying lives and countries, he is a master: the decisions he made cost tens of thousands of lives in Vietnam, Cambodia and Laos. He also pressured Nixon to support the 1973 coup in Chile against the democratically elected socialist president Salvador Allende, believing that his model of government could be "treacherous" to American interests in the region.
I see this current period in technology as comparable to the period after the invention of printing, in which the previous view of the world was challenged by a new technology. So it will affect everyone, but there will always be only a few in any generation who can deal with its full spectrum of implications. And that's a big problem for all societies right now. Europe had to learn this when it went through a comparable experience in the extremely bloody and destructive wars of the 16th and 17th centuries, which killed a third of the population of Central Europe with conventional weapons.
And it was only after those wars that the notion of sovereignty and international law emerged as a mobilizing concept.

About China, some Americans think that if we defeat it, it will become democratic and peaceful. But there is no precedent for this anywhere in Chinese history. The far more likely outcome is civil war, and civil wars fought on ideological principles would add a new element of catastrophe. It is not in our interest to bring China to dissolution. So here is a principle of interest that transcends moral principle in the name of moral principle. That's the ambiguity of it. And if you ask me, how are we going to handle this? Where do we find Lincoln? Nobody knows. (…)
My theme is the need for balance and moderation, and for institutionalizing them. That is the goal; whether it will always succeed is a different question. We would need great leaders — or good leaders, like Gerald Ford, who inherited a dissolving government. He did decent things. And his opponents could also count on him to do decent things. You don't find that as a typical attribute now.
This is the problem that needs to be solved, and I believe I've spent my life trying to deal with it. It's not an easily fixable problem now, and I don't necessarily know how it will be resolved. I think it can be done well on the technology side — we'll be forced to deal with it. When the public understands that it is surrounded by machines that act on a basis it does not understand, the dialogue about this will have to expand. (…)
I think what we need is people who make that decision — who are living in this moment and want to do something about it besides feeling sorry for themselves. I'm not saying it can always be done dramatically. But we don't usually reach a point in history where there's a real transition, not just a virtual one. This one is real, in the sense that incredible things are happening. And they're happening to people who weren't necessarily aiming for them. I am talking about the technology. At the same time, if you look at military history, it can be said that it was never possible to destroy all your opponents, because of the limitations of geography and accuracy. Now there are no such limitations. Every opponent is 100% vulnerable.
So there is no limit, and alongside this destructiveness, you can now create weapons that recognize their own targets. Destructiveness thus becomes virtually automatic. While it is standard doctrine that there must always be a human in the loop, this is not always possible in practice. Theoretically, it is. But while all this is happening, you keep building more and more destructiveness without trying to limit it. The problem is that all the protesters in the various squares around the world say that too. And they want to solve it by feeling sorry for themselves and putting pressure on governments. They have two illusions. First, you cannot abolish this technology. Second, there needs to be an element of strength in international politics. This is the essence of the matter. (…)
We could end up destroying ourselves. And now it's possible to get to a point where machines can refuse to be turned off. I mean, once the machines recognize this possibility, they can build it into their advice before any attempt at containment. Many genius scientists believe this — and they know more than I do. (…)
Look, we probably don't have enough time to give you a perfect answer. And there has never been a period when you could say those goals were actually achieved. But our first step has to be risk mitigation. I think that within five years the technology, combined with the other factors, will become more and more dangerous.
[Demis] Hassabis is one of the leading scientists who understands where the world is going. So more and more scientists will be convinced of what is at stake… Scientists are not strategists, but they have been affected by the turmoil of their time — and by the fact that, if you want to get ahead, you have to follow certain paths that aren't necessarily popular. Standing out and doing well has become more difficult.