When it comes to the dangers of AI, surveillance poses more risk than anything

ARI SHAPIRO, HOST:

At least 1 billion surveillance cameras are spying on the world right now, according to one estimate, and more than half of them are in China, though the country has less than a fifth of the world's population. To sort through data from those hundreds of millions of cameras, the Chinese government is enlisting the help of artificial intelligence - technology that can identify faces, voices, even the way someone walks. Paul Scharre investigated the country's surveillance systems for his new book "Four Battlegrounds: Power In The Age Of Artificial Intelligence." He writes that AI has the power to reshape the entire landscape of human governance and warfare, from enabling the spread of authoritarianism to influencing how wars start and end. Paul Scharre, welcome back to ALL THINGS CONSIDERED.

PAUL SCHARRE: Thanks. Thanks for having me.

SHAPIRO: You say early on in the book that the dangers of AI are not the dangers science fiction warned us about. We don't have to be afraid of killer robots rebelling against their human makers, at least not yet. So what is the immediate threat?

SCHARRE: Well, the immediate threat isn't that the AI systems turn on us. It's what people may do with these AI technologies. And we can see, as you mentioned, the development of an AI-enabled model of repression that China is pioneering, particularly in Xinjiang, where they're using AI to help in the repression of the ethnic Uyghurs who live there, but also nationwide. And then China's beginning to export its model of AI-enabled repression globally.

SHAPIRO: When you spent time in China looking at the country's use of AI, what surprised you?

SCHARRE: So many things. One of them is - you know, it's one thing to hear about 500 million surveillance cameras deployed in China, but it's an entirely different thing to walk down the streets of a major Chinese city and see these cameras everywhere - at light poles, at intersections, halfway down the block - sometimes to the point of absurdity. I would sit and count how many cameras there were on a given light pole. And so the surveillance is ubiquitous. And it's not trying to be hidden because, of course, the Chinese Communist Party wants to subtly remind people that, in fact, they are being watched.

SHAPIRO: Yeah.

SCHARRE: And one of the things that AI enables the government to do is to then put electronic eyeballs behind all these cameras because how do you monitor 500 million cameras? Well, you need AI to do it.

SHAPIRO: As China exports this technology and these standards and norms to countries like Zimbabwe and Venezuela and others, how does that ripple out?

SCHARRE: Well, one of the things that's really troubling, of course, is China's export of its surveillance technology and the social software - the norms and standards behind how it's used. China's technology for policing and surveillance has been sold to over 80 countries worldwide. But China's also been engaging in training with countries on things like information management and cyberspace laws and norms, exporting its model for how AI technology can be used for censorship and surveillance. Over 30 countries have been engaged in some of these training sessions. And in many cases, we've seen that, following Chinese engagement, other countries have passed laws that mirror China's model.

SHAPIRO: Is it accurate to view this as a kind of yin and yang where, on the one hand, China is spreading this AI-powered autocratic philosophy and, on the other hand, the U.S. and Western Europe are spreading something else?

SCHARRE: Well, one of the hard parts about this problem is that there should be - we would like to see pushback from democratic countries, the United States and others, about how AI is used so that there is a competing model that's consistent with democratic values. But that doesn't exist yet. There's, of course, quite a bit of pushback here in the United States and in Europe against the use of facial recognition technology, particularly by law enforcement. Certainly, democratic governments are not doing what the Chinese government is doing.

But one of the things that's difficult within democratic societies is that, because power is so much more decentralized, the process of coming up with a model for AI governance is much messier. It's slower because there's a give-and-take between the government - local, state and federal authorities - and civil society and the media and grassroots movements of citizens. And so it's taking longer for there to be a model coming out of democratic countries, and that's a problem 'cause there's a vacuum in, really, the ability to push back against what China is doing.

SHAPIRO: I'd love for you to share an anecdote that you begin the book with where you describe aircraft dogfight trials where there are two simulated pilots. One is human. One is AI. And what happened?

SCHARRE: Sure. So the U.S. military did a project to build an AI dogfighting agent - an AI agent that can control an aircraft in a simulator, although they are now working on transferring this to real-world aircraft - and it could engage in dogfighting against a human. And in the final trials, the winning agent among a number of different companies that submitted their AIs in the competition went head-to-head against an experienced Air Force pilot and hands-down crushed the human pilot. The human pilot didn't get a single shot off against the AI. And one of the things that was quite remarkable was that the AI learned on its own new techniques for dogfighting that humans can't actually execute - head-to-head gunshots. There's a split second when the aircraft are circling each other and are nose-to-nose, and there's really no good way for a human to get a shot off.

SHAPIRO: It's just too dangerous.

SCHARRE: It's too dangerous. The aircraft are racing at each other at high speeds, there's a risk of collision, and there's only a split second where you actually could make an engagement. But the AI learned that it could do that - and do it with superhuman precision - and it was very lethal and effective.

SHAPIRO: Paul, I know you said at the beginning we don't have to worry about AI rebelling against humans and overpowering us, but what you're saying right now is not reassuring.

SCHARRE: Well, I do think there's a risk that over time, in warfare, for example, as more and more AI systems are adopted by militaries, we begin to see a transition to a period of warfare in which humans effectively have to hand over control to machines. Some Chinese scholars have talked about the idea of a singularity on the battlefield where the pace of combat action eclipses humans' ability to respond, and militaries effectively have to turn over the keys to AI systems just to remain competitive. And that is a troubling prospect.

SHAPIRO: To end on a more positive note, of course there's a chance that AI could start or inflame a war. But you also write that AI could help avoid war. How would that work?

SCHARRE: Well, one of the things that AI might be potentially very valuable for is increasing transparency among states and making it easier for states to process information and to have more accurate information about, for example, political decision-making among other countries or the military balance of power. A great example of this came out of the run-up to Russia's invasion of Ukraine, where the United States government released lots of information about Russia's military buildup, helping to shine a light on the fact that Russia was poised to invade Ukraine. Now, that didn't stop Russia's invasion, but what it allowed the U.S. to do was to convince allies that this invasion was coming and was very real, and to help build up political and diplomatic support for Ukraine. And that's a great example where AI can be used to help process information from drones and satellites. So that use of AI to get more, and more accurate, information about the world could be one way in which AI could be very stabilizing internationally.

SHAPIRO: Paul Scharre's latest book is "Four Battlegrounds: Power In The Age Of Artificial Intelligence." Thanks a lot.

SCHARRE: Thank you. Thanks for having me.

(SOUNDBITE OF ROYKSOPP'S "RISING URGE") Transcript provided by NPR, Copyright NPR.

