Can AI be trusted in warfare?

Earlier this year, House and Senate committees and subcommittees heard a good bit of alarming testimony about artificial intelligence and China. Alexandr Wang, the CEO of Scale AI, testified that, “The Chinese Communist Party deeply understands the potential for AI to disrupt warfare. … AI is China’s Apollo project.”

Michèle Flournoy, who served as Under Secretary of Defense for Policy in the Obama administration, said, “The Chinese have something called civil-military fusion, which basically says that the government can demand the cooperation of any company, any academic institution, any scientist, in support of its military. We have a very different approach: We have a truly private sector, and individuals and scientists and academics and companies get to choose whether they want to contribute to national security.”

But if we’re going to understand the future of artificial intelligence in national security, it may help to take a look back, to when AI was proving its potential on a couple of board games.

In 1997, Garry Kasparov, widely regarded as one of the greatest chess players of all time, accepted a challenge from IBM’s Deep Blue. He won the opening game, but that was it; Deep Blue took the match.

The ancient game of Go, hugely popular in Asia, is even more complicated than chess. Lee Sedol, a young South Korean, was considered perhaps the greatest Go player in the world. The award-winning documentary “AlphaGo” captured the media frenzy in 2016 before the first of five games between Lee and DeepMind’s specially-designed AI program. Lee remarked, “I believe that human intuition is still too advanced for AI to have caught up.”

Lee and human intuition were crushed, four games to one – a staggering, headline-making event only a few years ago, yet already little more than a footnote in the evolution of artificial intelligence.

Which left poker – heads-up, no-limit Texas hold ’em. People get to lie in poker. Decisions have to be made on imperfect information, which is precisely what attracted the attention of Tuomas Sandholm, a professor of computer science at Carnegie Mellon. “Almost all problems in the real world are imperfect information games,” he said, “in the sense that the other players know things that I don’t know, and I know things that the other players don’t know.”

In 2017, the Carnegie Mellon team pitted its AI program, Libratus, against four professional poker players, including Jason Les, who recalled, “We really wanted to fight for humanity and show that our beloved game of poker was so complex that humans still had an edge over AI.”

Les said the AI program played very much unlike a human: “An AI can know that it’s going to play a certain hand 13% of the time and have a much more complex strategy than a human mind is able to have.”

“But you were representing humanity, and you lost!” said Koppel.

“Well, you’re rubbing salt in the wound!” Les laughed. “Yes, we wanted to demonstrate that this game was so complex, that AI had not quite gotten there yet. Losing to the AI made me realize that this technology had gotten very advanced.”

Sandholm said, “The techniques that we developed were not really techniques for ‘solving’ poker per se. They were techniques for solving imperfect information games more generally.”
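The equilibrium-finding techniques Sandholm describes grew out of regret-based learning methods; Libratus is widely reported to have built on counterfactual regret minimization. Below is a minimal, illustrative sketch of its simplest building block – regret matching – applied to rock-paper-scissors rather than poker. All of the names and numbers here are my own for illustration; this is a toy example of the general idea, not Sandholm's actual system.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats action b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def get_strategy(regrets):
    """Mix actions in proportion to positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no regret yet: play uniformly

def train(iterations=20000, seed=0):
    """Self-play regret matching; returns the average strategy."""
    rng = random.Random(seed)
    regrets = [0.0] * ACTIONS
    opp_regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = get_strategy(regrets)
        opp_strat = get_strategy(opp_regrets)
        a = rng.choices(range(ACTIONS), weights=strat)[0]
        b = rng.choices(range(ACTIONS), weights=opp_strat)[0]
        # Regret for each alternative: how much better it would have done
        # than the action actually played.
        for alt in range(ACTIONS):
            regrets[alt] += payoff(alt, b) - payoff(a, b)
            opp_regrets[alt] += payoff(alt, a) - payoff(b, a)
        for i in range(ACTIONS):
            strategy_sum[i] += strat[i]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = train()
# The average strategy converges toward the game's equilibrium mix,
# roughly one-third rock, one-third paper, one-third scissors.
```

The point of the sketch is the mechanism Les alludes to: the algorithm does not pick a “best” move, it learns a probability mix over moves (like playing a hand 13% of the time), which is what makes the same machinery applicable to any imperfect-information game.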

Koppel said, “Basically, poker is a civilized – relatively civilized – form of warfare?”

“That is a good way to put it,” said Les. “We’re not out there with guns, tanks and planes, but we’re out there with chips and cards, and we’re waging battle there. It’s still, at the end of the day, a strategy game.”

Having sharpened its skills on poker, Professor Sandholm’s AI company, Strategy Robot, now works as a Pentagon contractor, filling in the gaps of imperfect information. “We are trying to help the nation and our allies have a superior AI capability for this type of decision-making,” he said.

Koppel said, “So, I’m assuming that that kind of information is being funneled to the Ukrainian military?”

“I can’t comment on that,” Sandholm replied.

“But whatever you have, you give it to the Pentagon, what the Pentagon does with it is none of your business?”

“Well, it is our business.  I just can’t talk about it!”

“OK! But is it fair to say that the same principles that are applied to AI playing poker are now being applied to a war that is being fought?”

“The current war, I can’t comment,” said Sandholm. “But for military strategy operations and tactics in general, yes.”

Artificial intelligence in warfighting is already a foregone conclusion. For the moment, though, U.S. policy insists that there always be human oversight. And there’s a new office at the Pentagon, under the cautious guidance of Dr. Craig Martell, to ensure that the policy is implemented. The Chief Digital and Artificial Intelligence Office, said Martell, has a unique role: “What we’re gonna do is provide guardrails and policies that say, ‘If you’re going to acquire AI, here’s what it’s like to do it responsibly. If you’re going to deploy AI, here’s how you have to evaluate it.’”

What that boils down to is a question of confidence, when the wrong decision will cost lives. Martell said, “Imagine an AI told a commander, ‘Do action A,’ and the commander through all of his or her training would’ve said, ‘Do action B.’ What should that commander do? Should the commander listen to that machine, or should the commander listen to his or her training and intuition?

“If the DOD is good at one thing, we are very good at training. Training, training, training, training,” Martell said. “And through all of that training, if the commander got used to trusting that machine, then the commander might trust the machine. If the commander got used to not trusting the machine, then the commander wouldn’t.”

If that sounds like a gigantic waffle, it is; but it also has the virtue of containing more than a grain of truth. Jason Les, the dethroned poker champion, speaks from personal experience: “I could take you back to the beginning of this AI challenge. When the AI told me how to play a hand a certain way, I would have believed, from my experience, that it was giving me bad advice – that my conventional wisdom and my understanding of strategy were the most optimal. However, over time, playing against the AI for thousands of hands, finally that confidence builds up, and eventually it’s trusted for these higher-stakes decisions.”

Sandholm said, “The thing that keeps me up at night is really, what if in these military settings we fall behind – China, for example – in our decision-making AI technology?”

Is that happening? “I think China has caught up in AI with the U.S. overall, and we’re kind of on par right now,” Sandholm said. “I think in military AI, China has much better pickup in actually adopting AI in the military.”

Michèle Flournoy said, “I don’t think we know exactly how fast they’re moving. I think we cannot afford to take our foot off the gas. When you think about it, you know, a China scenario – if China’s moving against Taiwan – if you wait until they’re actually attacking Taiwan to have that sense of urgency and to respond, it’s gonna be over before the first new piece of whatever you think you need actually arrives. So, to me that means that we haven’t fully absorbed the urgency of doing this.”

Which is precisely what makes this next statement (and it does accurately reflect U.S. policy) difficult to accept. According to Flournoy, “We have got to proceed with development, but with a very strong ethical and normative framework in place that ensures that the only AI we actually deploy for military purposes is safe, is secure, is responsible, is explainable, is trustworthy. But this notion that AI’s gonna be making large campaign-level decisions in warfare, I don’t see that given our values as a democracy, given the norms that we’ve established already.”

Koppel asked, “And yet, when we come up against the competition, and we come to believe that our competitors are not being bound by the same ethical guidelines, what do you do?”

“If an adversary uses a weapon, you know, that creates massive civilian casualties, or things that are equivalent to war crimes, we don’t say, ‘OK, well, we have to do that, too.’ [Instead], we call them out and we try to sanction them.”

“I’m not sure I accept that,” Koppel said. “There have simply been too many times, going back to 1945 and the bombings of Hiroshima and Nagasaki, when we clearly were not bound by those kinds of strictures.”

“That’s fair, that’s fair.”

“And when we feel that an adversary is gaining advantages over us, I’m not altogether confident that we would remain bound by those kinds of strictures?”

“Yeah, my hope would be that we wouldn’t abandon the same principles as they did,” Flournoy replied. “Because at the end of the day, how we fight says a lot about who we are.”

Precisely the argument made last summer when the Biden administration sent a shipment of cluster bombs – banned by more than 120 countries – to Ukraine.

The issue before us, though, is human oversight of all military AI programs. According to Sandholm, “The mistakes that I see in life, almost all of them are made by humans. People think that, you know, there should be human oversight of AI, which I actually do believe. There should be human oversight of AI. But there should also be AI oversight of humans. So, the oversight should be in both directions. And that balance of oversight is gonna shift over time.”

There is, when you think about it, a pattern to the games that artificial intelligence programs won over the very best players in the world – in poker, in Go, and in chess. Hardly anyone believed it could happen until, of course, it did.

As Sandholm put it, “Humans believe that they’re better at decision-making than they really are.”

Story produced by Dustin Stephens. Editor: Carol Ross.
