U.S. Army preps for future of AI on the battlefield
April 8, 2019
As the U.S. Army launches a project to bring artificial intelligence (AI) to the battlefield, other researchers have developed tests that reveal how AI systems actually reach their decisions – “explainable artificial intelligence,” which can play a crucial role in preventing undesirable decision-making.
The Army project, a $72 million fundamental AI research effort, will run for five years, searching for capabilities that enhance mission effectiveness: augmenting soldiers, optimizing operations, increasing readiness, and reducing casualties.
Military use of AI isn’t a remotely new concept; in fact, the U.S. Department of Defense (DoD) began training computers to mimic basic human reasoning back in the 1960s.
Today, AI and machine learning algorithms are already integral to daily life, powering everything from digital speech assistants to autonomous cars and drones.
But the big challenge is that we don’t know exactly how AI systems reach their conclusions: How “smart” are they, really? It’s often unclear whether the decisions AI makes are truly intelligent.
A group of researchers in Germany and Singapore – unrelated to the Army project – recently tackled this lack of “explainable AI,” offering a glimpse of the diverse spectrum of “intelligence” in current AI systems. They analyzed AI systems with novel technology that automates the analysis and quantification of decision-making strategies.
Figure 1: The U.S. Army is preparing for AI on the battlefield. Illustration by Jhi Scott, CCDC Army Research Laboratory.
“Explainable AI” is one of the most important steps toward practical application of AI, according to Klaus-Robert Müller, professor of machine learning at TU Berlin in Germany. Researchers from TU Berlin, Fraunhofer Heinrich Hertz Institute, and Singapore University of Technology and Design developed algorithms that put existing AI systems to the test and derive quantitative information about them, revealing a whole spectrum of behavior: from naïve problem-solving, through cheating strategies, up to highly elaborate “intelligent” strategic solutions.
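The group’s published tooling is more elaborate, but the core of such quantification can be sketched briefly: compute one explanation heatmap per image, then cluster the heatmaps to see whether the model follows a single coherent strategy or several distinct ones. The Python sketch below is illustrative only – the synthetic heatmaps and off-the-shelf spectral clustering are assumptions standing in for the researchers’ actual pipeline.

```python
# Illustrative sketch only: synthetic heatmaps plus scikit-learn's
# SpectralClustering stand in for the researchers' actual pipeline.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Stand-ins for per-image relevance heatmaps (16x16 each).
# Half attend to the image center (the object itself); half attend to
# a corner (say, a watermark) - two different decision strategies.
center = np.zeros((50, 16, 16))
center[:, 6:10, 6:10] = 1.0
corner = np.zeros((50, 16, 16))
corner[:, 0:3, 0:3] = 1.0
heatmaps = np.concatenate([center, corner])
heatmaps += 0.05 * rng.standard_normal(heatmaps.shape)

# Flatten each heatmap into a feature vector and cluster.
X = heatmaps.reshape(len(heatmaps), -1)
labels = SpectralClustering(
    n_clusters=2, affinity="nearest_neighbors", random_state=0
).fit_predict(X)

# Well-separated clusters signal distinct strategies; inspecting a few
# heatmaps per cluster reveals whether one of them is a cheat.
print(np.bincount(labels))  # expected: roughly [50, 50]
```

A cluster of heatmaps that all point at a watermark rather than at the object is exactly the kind of “naïve” strategy the study uncovered.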
This group discovered that an AI system that won several international image-classification competitions a few years ago pursued a strategy that is naïve from a human point of view: It classified images mainly by context. Images were assigned to the category “ship” when there was a lot of water in the picture, or to “train” if rails were present; still others were categorized by their copyright watermark. The system never solved its real task – detecting the concept of a ship or a train – even though it classified the majority of images correctly.
They also found faulty problem-solving strategies in state-of-the-art AI algorithms – deep neural networks – that were considered immune to such lapses. Surprisingly, these networks based their classification decisions in part on artifacts created during the preparation of the images, which have nothing to do with the actual image content.
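How would one catch a network leaning on water, rails, or preparation artifacts? The researchers build relevance heatmaps layer by layer; a minimal stand-in with the same flavor is plain input-gradient saliency. In the sketch below, the pretrained ResNet and the file name "ship.jpg" are hypothetical placeholders, not part of the original study.

```python
# Hedged sketch: input-gradient saliency as a simple cousin of the
# relevance-heatmap techniques described above. Model choice and image
# file are illustrative assumptions, not the researchers' code.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any pretrained classifier will do for illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("ship.jpg")).unsqueeze(0)  # hypothetical image
img.requires_grad_(True)

logits = model(img)
top_class = logits.argmax(dim=1)

# Gradient of the winning class score with respect to the input pixels.
logits[0, top_class].backward()

# Per-pixel saliency: large values mark regions that drove the decision.
saliency = img.grad.abs().max(dim=1)[0].squeeze()

# If the hot spots lie in the water or on a watermark rather than on the
# ship itself, the classifier is exploiting context, not the concept.
print(saliency.shape)  # torch.Size([224, 224])
```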
“Our automated technology is open source and available to all scientists,” Müller says. “Our work is an important first step in making AI systems more robust, explainable, and secure in the future, and more will have to follow. This is an essential prerequisite for general use of AI.”
“Explainable AI” will be crucial in military applications. As part of the Army project, Carnegie Mellon University is leading a consortium of universities collaborating with the Army Research Laboratory (ARL) to speed research and development of advanced algorithms, autonomy, and AI aimed at enhancing national security and defense.
The Army’s ultimate goal is to accelerate the impact of battlefield AI. “Tackling difficult science and technology challenges is rarely done alone, and there is no greater challenge or opportunity facing the Army than AI,” says Dr. Philip Perconti, director of the Army’s corporate laboratory.
The project will focus on developing robust operational AI solutions that enable autonomous processing, exploitation, and dissemination of intelligence, along with other critical operational decision-support activities. It also aims to support human-machine teaming using AI.
In support of multidomain operations, “AI is a crucial technology to enhance situational awareness and accelerate the realization of timely and actionable information that can save lives,” says Andrew Ladas, who leads ARL’s Army Artificial Intelligence Innovation Institute.
Adversaries with AI capabilities will pose new threats to military platforms, both human-in-the-loop and autonomous. “The changing complexity of future conflict will present never-seen-before situations wrought with noisy, incomplete, and deceptive tactics designed to defeat AI algorithms,” Ladas says. “Success in this battlefield intelligence race will be achieved by increasing AI capabilities as well as uncovering unique and effective ways to merge AI with soldier knowledge and intelligence.”