Is the military ready for AI to help make decisions on the battlefield?

March 08, 2019

By Sally Cole, Senior Editor, Military Embedded Systems

A study found that less-than-competent users of artificial intelligence (AI) on the battlefield – presumably those who would need the AI boost most of all – are actually the least likely to be swayed by rational justifications, even when the AI is thought to be infallible.

Think of it this way: If you thought you knew the way to a destination, would you still use GPS? U.S. Army scientists set out to determine whether AI – which can seem opaque or frustrating for some people to use – will be helpful for making decisions on the battlefield.

This project – run by Army scientists and researchers from the University of California, Santa Barbara – tested the theory that many people trust their own abilities far more than they trust a computer's, and that this overconfidence can affect their judgment when making decisions under pressure.

“The U.S. Army continues to push the modernization of its forces, with notable efforts like the development of smartphone-based software for real-time information delivery such as the Android Tactical Assault Kit (ATAK), and the allocation of significant funding toward researching new AI and machine-learning methods to assist command-and-control personnel,” says Dr. James Schaffer, scientist for RDECOM’s Army Research Laboratory (ARL), the Army’s corporate research laboratory, at ARL West in Playa Vista, California.

Despite these advances, there’s still a significant gap in basic knowledge about the use of AI and whether AI will help or hinder military decision-making processes.

To control all relevant factors, the researchers constructed an abstract experiment similar to the iterated Prisoner’s Dilemma – a game in which players must choose to cooperate with or defect against their co-players in every round. An online version of the game was developed, in which players earn points by making good decisions in each round.

In the game, AI is used to generate advice in each round, which appears alongside the game interface; the “advisor” makes a suggestion about which decision the player should make. The researchers designed the AI to always recommend the optimal course of action. But, as in real life, players must opt to access the AI’s advice manually – just as a user must manually switch on GPS – and they know they are free to accept or ignore its suggestions.
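
The study’s actual implementation isn’t reproduced here, but the mechanics described above are straightforward to sketch. Below is a minimal Python illustration assuming standard Prisoner’s Dilemma payoff values; the function names and the way the co-player’s move is predicted are assumptions for illustration. The key detail it mirrors is the opt-in advisor: advice is computed only when the player explicitly requests it.

```python
# Minimal sketch of one round of the advice-taking game described above.
# Payoff values, names, and the co-player model are illustrative assumptions;
# the study's actual implementation is not published here.
from typing import Optional, Tuple
import random

# Standard Prisoner's Dilemma payoffs: (my_points, co_player_points)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def optimal_advice(predicted_co_player_move: str) -> str:
    """Recommend the move that maximizes the player's own points against the
    predicted co-player move. (In any single round this reduces to 'defect';
    the study's AI presumably reasoned over the iterated game.)"""
    return max(("cooperate", "defect"),
               key=lambda move: PAYOFFS[(move, predicted_co_player_move)][0])

def play_round(player_move: str, co_player_move: str,
               advice_requested: bool) -> Tuple[int, Optional[str]]:
    # Advice is generated only if the player opts in -- like switching on GPS.
    advice = optimal_advice(co_player_move) if advice_requested else None
    points, _ = PAYOFFS[(player_move, co_player_move)]
    return points, advice

# A player who trusts their own judgment and never consults the advisor:
points, advice = play_round("cooperate",
                            random.choice(["cooperate", "defect"]),
                            advice_requested=False)
```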

The researchers also presented different versions of this AI: Some were deliberately inaccurate, some required game information to be entered manually, and some justified their suggestions with rational arguments. All combinations of these AI treatments were tested so that interaction effects between AI configurations could be explored.
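
Fully crossing the treatment factors is what makes those interaction effects estimable, rather than just the main effect of each factor in isolation. A brief sketch of how such a factorial grid might be enumerated – the specific error-rate levels and field names are assumptions, not the study’s published protocol:

```python
# Illustrative factorial grid of the advisor treatments described above.
# Levels and field names are assumptions, not the study's published protocol.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class AdvisorConfig:
    error_rate: float    # 0.0 = always optimal; > 0 = deliberately inaccurate
    manual_input: bool   # game information must be entered by hand
    justification: bool  # suggestions come with a rational argument

ERROR_RATES = (0.0, 0.25, 0.5)  # assumed levels

# 3 x 2 x 2 = 12 crossed conditions.
CONDITIONS = [AdvisorConfig(e, m, j)
              for e, m, j in product(ERROR_RATES, (False, True), (False, True))]
assert len(CONDITIONS) == 12
```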

People were invited to play the game online; the researchers collected a profile of each player and monitored their behavior. Each player was asked about their familiarity with the game, while their true competency was also measured. A test given halfway through play measured their awareness of gameplay elements.

“What we discovered might trouble some advocates of AI,” Schaffer says. “Two-thirds of the human decisions disagreed with the AI, regardless of the number of errors in the suggestions.”

The more familiar a player claimed to be with the game beforehand, the less they used the AI – an effect that was still observed when controlling for the AI’s accuracy. This result suggests that improving a system’s accuracy alone won’t increase system adoption within this population.
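
“Controlling for the AI’s accuracy” presumably means modeling AI usage with both self-reported familiarity and the advisor’s error rate as predictors. A hedged sketch of such a model – the data and variable names are invented purely for illustration:

```python
# Sketch of a model that "controls for accuracy": regress whether the player
# used the AI on self-reported familiarity while the advisor's error rate is
# also in the model. Data and variable names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "used_ai":     [1, 0, 1, 1, 0, 0, 0, 1],   # did the player request advice?
    "familiarity": [1, 5, 4, 2, 3, 1, 4, 2],   # self-reported, 1 (low) to 5 (high)
    "error_rate":  [0.0, 0.0, 0.25, 0.25, 0.5, 0.5, 0.0, 0.25],
})

# A negative familiarity coefficient with error_rate held in the model would
# reproduce the reported effect: more self-reported familiarity, less AI use,
# regardless of how accurate the advisor actually was.
model = smf.logit("used_ai ~ familiarity + error_rate", data=df).fit(disp=0)
print(model.params)
```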

“This might be a harmless outcome if these players were really doing better, but they were in fact performing significantly worse than their humbler peers who reported knowing less about the game beforehand,” Schaffer explains. “When the AI attempted to justify its suggestions to players who reported high familiarity with the game, reduced awareness of gameplay elements was observed – a symptom of overtrusting and complacency.”

Despite this apparent overtrust, a corresponding increase in agreement with the AI’s suggestions wasn’t observed. This result presents a problem for system designers: incompetent users need AI the most, but are the least likely to be swayed by rational justifications.

Ironically, incompetent users were also the most likely to say that they trusted the AI, as measured in a post-game questionnaire. “This contrasts sharply with their observed neglect of the AI’s suggestions, demonstrating that people aren’t always honest or may not always be aware of their own behavior,” Schaffer says.

This work shows that while AI may enhance military decisions on the battlefield in the future, ongoing issues with its usability remain despite continued advances in AI accuracy, robustness, and speed. “Rational arguments have been demonstrated to be ineffective on some people, so designers may need to be more creative in designing interfaces for these systems,” Schaffer notes.