UNIVERSITY UPDATE: Advances in technology aim to enable safer battle conditions
Story | March 9, 2021
Several programs underway at Purdue University aim to develop state-of-the-art technology that will train military leaders for modern warfare and make battlefield operations more secure.
One team of Purdue innovators has developed battlefield-simulation technology and used it to produce a virtual reality (VR) tour of the Normandy, France, beaches where Allied troops landed during the D-Day operations (watch the VR video from Purdue at https://www.youtube.com/watch?v=EqRTAGPz5WM&feature=youtu.be).
“We apply what we know from the field of physics and treat the virtual soldiers almost like liquids that are interacting on the battlefield,” says Sorin Adam Matei, a professor of communication and associate dean in Purdue’s College of Liberal Arts. “Military educators can use this tool to teach future leaders lessons learned from historic battles in a visually exciting way that brings them to life for the students.”
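The liquid analogy echoes particle-based crowd models from computational physics, in which each agent is pulled toward an objective while short-range forces push neighbors apart, much as pressure does in a fluid. The following Python sketch illustrates that general technique with invented parameters; it is not the FORCES team’s actual code:

```python
import numpy as np

# Illustrative particle model: each "soldier" seeks a shared objective while
# being repelled by nearby neighbors (hypothetical parameters throughout).
N, DT = 200, 0.05
positions = np.random.rand(N, 2) * 100.0   # start scattered on a 100x100 field
goal = np.array([100.0, 50.0])             # common objective, e.g. a beachhead

def step(pos):
    # Unit vectors driving each soldier toward the objective.
    drive = goal - pos
    drive /= np.linalg.norm(drive, axis=1, keepdims=True) + 1e-9
    # Short-range pairwise repulsion, the fluid-pressure analogue.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=2) + 1e-9
    repulse = (diff / dist[..., None] * np.exp(-dist / 2.0)[..., None]).sum(axis=1)
    return pos + DT * (drive + repulse)

for _ in range(1000):
    positions = step(positions)
```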
This particular team’s work, under the aegis of the FORCES (4S) Strategy, Security and Social Systems Initiative in Purdue’s College of Liberal Arts, “supports the use of social scientific research in strategy and security activities to shape long-range and global military, political and organizational decision-making for a just, stable and secure world.”
Jonathan Poggie, a professor of engineering at Purdue, says of the project: “We’re exploring a new approach to group behavior that has the potential to significantly change wargaming and crisis management. I’m enthusiastic about bringing to bear some of the techniques we’ve developed in aerodynamics and high-performance computing on military decision-making.”
Poggie, team member Robert Kirchubel, and research assistant Matthew Konkoly are also working on a battlefield simulation of the Civil War battle of Gettysburg; the trio has formed a startup company called FORCES Inc. to help commercialize the technology.
Another team from Purdue is working on advances that will lead to more secure use of artificial intelligence (AI) in unmanned aerial systems (UASs) on the battlefield. Together with colleagues from Princeton University, the Purdue team is leading research on ways to protect the software of battlefield UASs by securing both their machine learning (ML) algorithms and the data those machines rely on to operate semi-autonomously. (Figure 1, above.)
The project, part of the Army Research Laboratory (ARL) Army Artificial Intelligence Institute (A2I2), is backed by up to $3.7 million over five years. The prototype system will be called SCRAMBLE, a somewhat-tortured acronym for “SeCure Real-time Decision-Making for the AutonoMous BattLefield.”
“The implications for insecure operation of these machine learning algorithms are very dire,” says principal investigator Saurabh Bagchi, a Purdue professor of electrical and computer engineering who holds a courtesy appointment in computer science. “If your platoon mistakes an enemy platoon for an ally, for example, then bad things happen. If your drone misidentifies a projectile coming at your base, then again, bad things happen. So, you want these machine learning algorithms to be secure from the ground up.”
SCRAMBLE is aimed at closing hackable loopholes in three ways. First, the prototype uses what the team calls robust adversarial ML algorithms, which can operate despite untested, partial, or manipulated data sources. Army researchers are reported to be evaluating SCRAMBLE at the ARL Computational and Information Sciences Directorate’s autonomous battlefield testbed.
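One widely used way to make a model robust to manipulated inputs is adversarial training: deliberately perturb training examples in the worst-case direction and teach the model to classify them anyway. The sketch below illustrates that general technique with a toy model and stand-in data; it is not the SCRAMBLE code:

```python
import torch
import torch.nn as nn

# Toy stand-ins: a small classifier over 16 placeholder sensor features,
# deciding between two classes (say, ally vs. threat). Not the SCRAMBLE model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm_perturb(x, y, epsilon=0.1):
    """Craft worst-case input perturbations (fast gradient sign method)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(64, 16)            # placeholder training inputs
y = torch.randint(0, 2, (64,))     # placeholder labels
for _ in range(100):
    x_adv = fgsm_perturb(x, y)     # manipulated copies of the inputs
    optimizer.zero_grad()
    # Train on clean and perturbed data so accuracy survives tampering.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```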
Second, the prototype will include a set of interpretable ML algorithms aimed at increasing warfighters’ trust in an autonomous machine while interacting with it. Prateek Mittal, an associate professor of electrical engineering and computer science at Princeton, will lead a group focused on developing that capability. “The ability of machine learning to automatically learn from data serves as an enabler for autonomous systems, but also makes them vulnerable to adversaries in unexpected ways,” Mittal says. “For example, malicious agents can insert bogus or corrupted information into the stream of data that an artificial intelligence system is using to learn, thereby compromising security. Our goal is to design trustworthy machine learning systems that are resilient to such threats.”
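Mittal’s poisoning example points to a complementary safeguard often called data sanitization: before training, screen out records that disagree sharply with the rest of the data. Here is a minimal Python sketch of that general idea, with a hypothetical model and an illustrative outlier threshold, not the project’s actual method:

```python
import torch

def flag_suspect_points(model, loss_fn, x, y, z=3.0):
    """Mark training rows whose loss is a statistical outlier (illustrative)."""
    with torch.no_grad():
        losses = torch.stack([loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
                              for xi, yi in zip(x, y)])
    # Rows far above the mean loss disagree with the clean majority and are
    # candidates for removal before (re)training.
    return losses > losses.mean() + z * losses.std()
```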
Bagchi and Mung Chiang, Purdue’s John A. Edwardson Dean of the College of Engineering and Roscoe H. George Distinguished Professor of Electrical and Computer Engineering, will lead work on the third strategy: secure, distributed execution of the various ML algorithms on multiple platforms during autonomous operation.
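When ML algorithms execute across many platforms, a few compromised nodes should not be able to corrupt the shared result. One well-known safeguard in distributed learning, shown here as an illustration rather than as SCRAMBLE’s actual design, is Byzantine-robust aggregation: combine the platforms’ model updates with a coordinate-wise median instead of a plain average, so a minority of corrupted updates cannot skew the outcome.

```python
import torch

def robust_aggregate(updates):
    """Coordinate-wise median of model updates from many platforms."""
    stacked = torch.stack(updates)         # shape: (num_platforms, num_params)
    return stacked.median(dim=0).values    # unmoved by a minority of bad nodes

# Five platforms report updates; one has been compromised and reports garbage.
honest = [torch.randn(10) * 0.01 for _ in range(4)]
corrupted = [torch.full((10,), 100.0)]
aggregate = robust_aggregate(honest + corrupted)  # stays near honest consensus
```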
“This team is uniquely positioned to develop secure machine learning algorithms and test them on a large scale,” Bagchi says. “We are excited at the prospect of close cooperation with a large team of Army Research Laboratory collaborators as we bring our vision to reality.”