AI now playing a big role on the battlefield
Story
June 20, 2024
Artificial intelligence (AI) promises to remove soldiers from the battlefield, increase the effectiveness of platforms and weapons, and enable better decision-making, but for years those benefits were theoretical rather than practical.
In 2024 there are signs that AI is finally being used on the battlefield after years of planning, and the U.S. military and industry are vying to get in on the AI push. Recent military operations in hot spots including Ukraine and Gaza show how AI is being used in battlefield contexts and how global militaries might begin adopting this technology.
For example, a Ukrainian startup called Swarmer conducted a field test near Kyiv during which personnel deployed a swarm of drones, coordinated through AI, to perform a specific mission: identifying and destroying hidden targets without human pilot intervention for navigation, according to a Politico report by Gian Volpicelli, Veronika Melkozerova, and Laura Kayali titled “‘Our Oppenheimer moment’: In Ukraine, the robot wars have already begun.” The process began with reconnaissance drones autonomously identifying the best flight paths, followed by bombers that executed the attack. A final pass by an uncrewed aerial system (UAS) confirmed the destruction of the targets.
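The three-stage workflow described in the Politico report – autonomous reconnaissance, strike, and confirmation – can be pictured as a simple pipeline. The sketch below is purely illustrative: the `Target` structure, function names, and pass logic are invented for this article, not drawn from Swarmer's actual software.

```python
# Illustrative sketch only: the recon -> strike -> confirm stages described
# in the Politico report, modeled as a minimal pipeline. All names and data
# structures here are hypothetical.
from dataclasses import dataclass


@dataclass
class Target:
    name: str
    located: bool = False
    destroyed: bool = False
    confirmed: bool = False


def recon_pass(targets):
    """Reconnaissance drones autonomously locate hidden targets."""
    for t in targets:
        t.located = True
    return [t for t in targets if t.located]


def strike_pass(located):
    """Bomber drones engage the targets the recon pass located."""
    for t in located:
        t.destroyed = True


def confirm_pass(targets):
    """A final UAS pass confirms destruction of each located target."""
    for t in targets:
        t.confirmed = t.destroyed
    return all(t.confirmed for t in targets)


targets = [Target("site-A"), Target("site-B")]
strike_pass(recon_pass(targets))
print(confirm_pass(targets))  # → True
```

The point of the staging is that each pass consumes only the output of the previous one, so no human pilot sits in the navigation loop.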
In another example, a Washington Post article titled “Israel offers a glimpse into the terrifying world of military AI” by Ishaan Tharoor reports that Israel is using an AI-based program named “Lavender,” which reportedly marks individuals Israel has associated with Hamas for airstrikes. One report indicates Israel has used the tool to identify as many as 37,000 people as targets, with almost no human oversight.
The defense industry has explored a wide range of uses for AI, from crunching data to coordinating drones.
AI analytics is one area that has seen a boom in activity. As data needs grow, so does the demand for technology that can sift through that data and produce actionable insights. The U.S. military has explored AI analytics technology that can quickly process data collected on the battlefield and deliver insights to a combatant commander.
Drone swarms have been another focus of AI tech. The U.S. Department of Defense (DoD) announced the Replicator Initiative last year, which aims to remove barriers and quickly field innovative capabilities to warfighters to handle specific operational challenges, and AI-powered drone swarms have been a big part of that effort.
AI may be important not just for executing swarm operations, but also for defending against them in the field. AI will be necessary to coordinate “large numbers of autonomous agents in dynamic and contested environments,” says Timothy Stewart, director of business development at Aitech (Chatsworth, California). The company is working on AI algorithms and machine-learning techniques to help military forces detect, analyze, and respond to swarm threats.
“These types of attacks pose one of the more significant risks to ground vehicles and in-field operations, where vehicles and personnel are left exposed and less able to maneuver across certain terrains,” Stewart says.
In the future, uncrewed systems powered by AI will be the norm, which will fundamentally change battlefield strategies, he argues.
“Robotics enhance force protection by reducing the reliance on human operators in high-risk environments, mitigating potential casualties, and preserving operational continuity in the face of adversarial swarm attacks,” Stewart says. “These platforms can operate autonomously or collaboratively with crewed assets, forming a networked ecosystem capable of adapting and responding to swarms in real time.”
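One hedged way to picture the “detect” and “analyze” steps Stewart describes is proximity clustering: many sensor tracks flying close together suggest a coordinated swarm rather than isolated contacts. The thresholds, track data, and greedy clustering approach below are all invented for illustration; a fielded system would fuse radar, EO/IR, and RF data with far more sophistication.

```python
# Hypothetical sketch of swarm detection: group sensor tracks by proximity,
# then flag any large cluster as a likely coordinated swarm. Distances are
# in meters; all values are invented for illustration.
from math import hypot


def cluster_tracks(tracks, radius=50.0):
    """Greedy single-link clustering: a track joins the first cluster
    that has any member within `radius` meters of it."""
    clusters = []
    for x, y in tracks:
        for c in clusters:
            if any(hypot(x - cx, y - cy) <= radius for cx, cy in c):
                c.append((x, y))
                break
        else:
            clusters.append([(x, y)])
    return clusters


def swarm_alert(tracks, min_size=5):
    """'Analyze' step: any cluster of min_size or more tracks is treated
    as a swarm threat rather than as isolated contacts."""
    return [c for c in cluster_tracks(tracks) if len(c) >= min_size]


# Six tight contacts plus one distant stray: only the tight group alerts.
tracks = [(0, 0), (10, 5), (20, 10), (30, 15), (40, 20), (50, 25), (900, 900)]
alerts = swarm_alert(tracks)
print(len(alerts), len(alerts[0]))  # → 1 6
```

The “respond” step Stewart mentions would then task defensive assets against the flagged cluster, which is well beyond what a toy sketch can show.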
In an experiment involving crewed aircraft, the U.S. Defense Advanced Research Projects Agency (DARPA) announced in April 2024 that it had conducted the first-ever in-air tests of AI algorithms autonomously piloting an F-16 fighter jet, after years of preliminaries. The AI did more than pilot the plane – it engaged in direct combat scenarios against a human-piloted counterpart. The test was part of DARPA’s Air Combat Evolution (ACE) program, which is dedicated to integrating AI into tactical aviation and establishing a framework for ethical and trusted human-machine teaming.
AI also enables faster deployment of capabilities to the field. Doug Beck, director of the Defense Innovation Unit, said at the XPONENTIAL 2024 exhibition in April 2024 that the DoD is using AI and machine learning (ML) to speed up software improvements to uncrewed vehicles across the services from a matter of months to a few days.
Officials ask industry to tackle AI challenges
Officials are openly asking the industry to focus more on AI in their development efforts. Melissa Johnson, the acquisition executive at U.S. Special Operations Command (USSOCOM, headquartered in Tampa, Florida), told SOF Week attendees in Tampa last month that AI is a growing part of the organization, saying that the industry needs to develop AI-powered solutions to solve today’s difficult data challenges.
“We need to be thinking about data governance, we need to be thinking about how that data is connected – all those things that are not exactly exciting to care about, but are vitally important to making sure that we get a capability downrange,” Johnson told the assembled audience at SOF Week.
AI can help with the integration of systems, which is a major focus of USSOCOM, Johnson said, adding that collaborative and autonomous systems would also be key to the service going forward.
“Being able to use much smaller things and getting them into a contested environment … and to be able to be survivable and lethal and provide that awareness is where we’re taking this,” she said. “And it’s not just one component. It’s not just one domain. But it’s crossing those domains.
“We are definitely investing in artificial intelligence and using it in ways for integrated deterrence, but also we tend to use it in ways for crisis response and counterterrorism,” she added.
AI is “really the name of the game,” she stated.
USSOCOM Commander Bryan P. Fenton also acknowledged AI’s growing influence in his keynote at SOF Week, urging industry to take the challenge seriously.
“Whatever you knew about artificial intelligence six months ago, forget about it,” Fenton said. “It’s just moving at such a rate of speed ... It’s pretty frightening.” (Figure 1.)
[Figure 1 | USSOCOM Commander Bryan P. Fenton speaks to attendees at SOF Week in May 2024. Image courtesy MES.]
Practical ways AI can help
The services are proactive about leveraging AI across multiple domains and are exploring new companies and commercial AI solutions. At recent defense trade shows, for example, a growing number of vendors have showcased products and services powered by AI.
At SOF Week, engineers at Primer AI showcased a tool that takes a massive amount of publicly available data from sources around the web and combines it with private data collected by an organization’s own internal network to create comprehensive situational awareness. At another SOF Week booth, Exovera demonstrated its AI analytical platform intended to help identify patterns to unlock a “wealth of new intelligence,” as described by Exovera CEO Bob Sogegian.
Defense engineering firm BlueHalo showcased an autonomous maritime vehicle under development with Kraken using AI/ML technology. At XPONENTIAL 2024, BlueHalo was also on hand to demonstrate its AI/ML-powered swarming UASs, which essentially act as a carrier vehicle for any sensor an operator wants to put on them. (Figure 2.)
[Figure 2 | BlueHalo’s HaloSwarm, an autonomous uncrewed vehicle with AI/ML swarm logic capabilities, showcased at XPONENTIAL 2024. Image courtesy MES.]
Speeding decision-making
AI enables the collection and analysis of large amounts of data to help soldiers make faster (and better) tactical decisions, says Shan Morgan, Vice President of Sales at Elma Electronic (Fremont, California).
“Our products are used extensively in support of [applications] that involve intensive data acquisition and processing, often requiring the use of AI in decision-making,” says Ram Rajan, Vice President of Engineering at Elma Electronic. Elma is focused on designing to the points laid out in the Sensor Open Systems Architecture (SOSA) Technical Standard to develop integrated, AI-ready platforms to support C5ISR [command, control, communications, computers, cyber, intelligence, surveillance, and reconnaissance], electronic warfare (EW), and electro-optical/infrared (EO/IR) applications in DoD programs. (Figure 3.)
Practical applications for AI include real-time object tracking for soldiers with advanced cameras and night vision, training troops in battle scenarios, seamless language translation, wearable tech with health monitoring, and AI-driven robots and UASs, Morgan says.
[Figure 3 | Elma Electronic’s JetSys-5320, its NVIDIA Jetson TX2-based rugged AI-enabled computer.]
AI and lethal force
Despite the benefits of AI, there are those who are concerned about ethics – particularly the idea of using AI for lethal force. That consideration doesn’t just apply to armed drones – even data-analytics platforms that use AI could have their conclusions used to make lethal-force decisions. Israel’s Lavender AI targeting technology has been criticized for lacking sufficient human oversight, raising the concern that innocent civilians could be identified as targets by unchecked AI software.
All defense companies developing some form of AI will argue that their system only enhances human decision-making rather than replacing it. But it might be wise to have guardrails in place, says Chris Ciufo, chief technology officer at General Micro Systems (Rancho Cucamonga, California).
Ciufo says he believes there are a few potential pitfalls with AI. One is that soldiers may become overly dependent on it, leaning on it more and more for decisions and losing some of their own critical-thinking skills in the process. Another is outsourcing life-or-death decisions to computers; the industry should be careful about how quickly it hands that power to AI – or perhaps not do so at all.
“If you are in a battle situation and AI is presenting you a literal piece of the battlefield, if it’s identifying threats that it sees and if it’s predicting – based upon history either from the current battle or hundreds of previous battles fought across the world – that enemies are likely to come through that valley over there ... you might say, ‘all right, let’s just blow it up,’” he says. “Or the computer will automatically blow it up.”
In battlefield situations like this, AI is focused on finding patterns. It may consider irrelevant data like the color of a building or whether a car is parked out front and treat it as an indication that the building belongs to a terrorist group, Ciufo argues. As a result, AI could put innocent people at risk by erroneously tagging them as targets.
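The spurious-correlation failure Ciufo warns about can be shown with a toy example. Everything below is invented for illustration: a naive frequency-based scorer “learns” that an irrelevant feature (building color) co-occurred with past threats, then flags an innocent site on that basis alone.

```python
# Toy illustration (invented data) of the failure mode Ciufo describes:
# a naive pattern-matcher keys on an irrelevant feature because it
# happened to co-occur with threats in the training history.

# Hypothetical past observations: (building_color, was_threat)
history = [("blue", True), ("blue", True), ("blue", True),
           ("white", False), ("white", False), ("gray", False)]


def naive_threat_score(color):
    """P(threat | color) estimated from raw co-occurrence counts --
    with no causal reasoning, color alone drives the score."""
    matches = [threat for c, threat in history if c == color]
    return sum(matches) / len(matches) if matches else 0.0


# An innocent blue building gets a maximal threat score purely because
# the (few) past threats happened to occupy blue buildings.
print(naive_threat_score("blue"))   # → 1.0
print(naive_threat_score("white"))  # → 0.0
```

With only three “blue” samples, the estimate is statistically meaningless, yet an unchecked system acting on it would treat every blue building as a confirmed threat – which is exactly why human oversight of AI targeting conclusions matters.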
AI can help with efficiency on the battlefield, but if militaries can’t resist the urge to lean on it for decisions, the consequences could be grave.
“Just because we can, should we? Where do we stop? Can you trust the AI, and how do you verify?” Ciufo asks.