GUEST BLOG: Beware the buzzword
Blog | June 20, 2024
Artificial intelligence (AI) is a real buzzword across all industries right now, but coverage across the media spectrum is not always helpful. We’re seeing mind-boggling new technical capabilities demonstrated each week, but it’s often hard to tie these to real-world use cases; this, coupled with hype-to-bust predictions and discussions about “artificial general intelligence,” serves to further distort the narrative.
The defense industry is no exception – there are many instances where personnel could and should be kept out of harm’s way by greater use of technology, including machine learning (ML) to perform repetitive or dangerous tasks. However, linking real operational use cases with near-term capabilities – as well as articulating the aspirational use cases and the art of the possible – can be challenging.
In the world of physics, there is a relatively small number of people trying to find a single unifying “Theory of Everything.” However, for most practical purposes, Newton’s Laws of Motion have proved perfectly adequate for three and a half centuries. Similarly, instead of expecting a single “AI” to solve all our problems or a single robot to carry out all tasks, the reality is that incremental, well-bounded component parts will gradually begin to automate pieces of the whole.
In the defense domain, there are many problems well-suited to the application of AI and ML. In particular, cognitive overload – driven by growing numbers of ever-higher-resolution sensors and augmented by nontraditional data sources – can cause damaging or even fatal delays in responding to a threat. This is an area where AI/ML is demonstrably able to help by trawling through this mass of data, classifying and correlating the items of genuine importance, and communicating and presenting them to troops and commanders for assessment. We should not expect the aforementioned single AI to perform all of these tasks – in fact, small, bounded, and provable components of the whole solution will plug into existing tactical-awareness systems.
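To make this concrete, the following is a minimal, hypothetical sketch of one such bounded component: it keeps only confident detections, correlates reports that appear to describe the same event, and ranks the result for an operator. The data shapes, thresholds, and names are illustrative assumptions rather than a description of any fielded system.

```python
# Minimal sketch of a bounded "classify, correlate, prioritise" component.
# All names, thresholds, and data shapes are illustrative assumptions only.
from dataclasses import dataclass
from itertools import groupby
from typing import List


@dataclass
class Detection:
    sensor_id: str
    timestamp: float   # seconds since start of mission (assumed)
    label: str         # class assigned by an upstream ML model
    confidence: float  # model confidence, 0.0 - 1.0


def classify(detections: List[Detection], min_confidence: float = 0.8) -> List[Detection]:
    """Keep only detections the upstream model is reasonably sure about."""
    return [d for d in detections if d.confidence >= min_confidence]


def correlate(detections: List[Detection], window_s: float = 5.0) -> List[List[Detection]]:
    """Group same-label detections that occur close together in time,
    so multiple sensors reporting one event become a single item."""
    detections = sorted(detections, key=lambda d: (d.label, d.timestamp))
    groups: List[List[Detection]] = []
    for _, items in groupby(detections, key=lambda d: d.label):
        current: List[Detection] = []
        for d in items:
            if current and d.timestamp - current[-1].timestamp > window_s:
                groups.append(current)
                current = []
            current.append(d)
        if current:
            groups.append(current)
    return groups


def prioritise(groups: List[List[Detection]]) -> List[dict]:
    """Summarise each correlated group for presentation to an operator."""
    summaries = [
        {
            "label": g[0].label,
            "sensors": sorted({d.sensor_id for d in g}),
            "peak_confidence": max(d.confidence for d in g),
        }
        for g in groups
    ]
    return sorted(summaries, key=lambda s: s["peak_confidence"], reverse=True)


if __name__ == "__main__":
    raw = [
        Detection("radar-1", 10.0, "vehicle", 0.92),
        Detection("eo-2", 11.5, "vehicle", 0.88),
        Detection("radar-1", 40.0, "clutter", 0.45),
    ]
    for item in prioritise(correlate(classify(raw))):
        print(item)
```

The point of the sketch is its shape, not its contents: each stage is small, testable, and replaceable, which is what allows components like this to plug into existing tactical-awareness systems rather than replace them.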
Pushing it to the edge
There are numerous challenges presented by the desire to process data in this way. Battlefield communication is limited in bandwidth and subject to stringent security constraints. Processing such huge volumes of data necessitates inference close to the sensors, at the edge nodes of the tactical network. Deployment and updating of inferencing models at the edge nodes requires new infrastructure, as does recovery of the sensor data relating to corner cases that feeds the workflow of retraining new, more capable models. Digital twins, synthetic data generation, and comprehensive simulation away from live operations have been shown to deliver significant benefits in the creation of competent inferencing systems.
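As a rough illustration of that workflow, the sketch below shows an edge-node inference loop that flags low-confidence samples as corner cases and stages them locally for later upload into a retraining pipeline. The model stub, threshold, and storage scheme are assumptions made purely for the example, not a description of any deployed infrastructure.

```python
# Illustrative edge-node inference loop that captures "corner case" samples
# for later retraining. The model, threshold, and storage scheme are assumed.
import json
import pathlib
import random
from typing import Tuple

CORNER_CASE_DIR = pathlib.Path("corner_cases")  # staged for upload when bandwidth allows


def run_model(sample: dict) -> Tuple[str, float]:
    """Stand-in for an on-board inferencing model; returns (label, confidence)."""
    return random.choice(["vehicle", "person", "clutter"]), random.random()


def process_sample(sample: dict, low_conf: float = 0.6) -> dict:
    label, confidence = run_model(sample)
    result = {"id": sample["id"], "label": label, "confidence": confidence}
    if confidence < low_conf:
        # Keep the raw sample locally so it can be fed back into the
        # retraining workflow once a secure, high-bandwidth link is available.
        CORNER_CASE_DIR.mkdir(exist_ok=True)
        (CORNER_CASE_DIR / f"{sample['id']}.json").write_text(
            json.dumps({"sample": sample, "result": result})
        )
    return result


if __name__ == "__main__":
    for i in range(5):
        print(process_sample({"id": f"frame-{i:04d}", "payload": "sensor data here"}))
```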
At each point in the battlefield domain, there needs to be an appropriate level of processing performance to meet the increasing demands of these applications and to keep pace with the art of the possible seen in other domains, such as on-road autonomous driving. The size, weight, and power (SWaP) of the computing systems will vary according to the capability of the platforms on which they are deployed; larger platforms will be able to sustain higher SWaP, and hence higher performance and greater capability, but all of the computing elements need to be able to survive the harsh environment of the battlefield.
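One simple way to picture the SWaP trade-off is as a mapping from a platform’s sustainable power budget to the most capable model variant that fits within it. The tiers, wattages, and model names below are invented purely for illustration.

```python
# Hypothetical illustration only: matching a model variant to a platform's
# SWaP budget. Tiers, wattages, and model names are invented for the example.
MODEL_TIERS = [
    # (minimum sustained power in watts required by this variant, model variant)
    (15, "detector-nano"),    # dismounted or small-UAS-class platform
    (75, "detector-small"),   # light-vehicle-class platform
    (300, "detector-large"),  # larger vehicle or shelter-class platform
]


def select_model(power_budget_w: float) -> str:
    """Pick the most capable model variant that fits the platform's power budget."""
    chosen = None
    for required_watts, model in MODEL_TIERS:
        if power_budget_w >= required_watts:
            chosen = model
    if chosen is None:
        raise ValueError("No model variant fits this power budget")
    return chosen


if __name__ == "__main__":
    for budget in (20, 100, 500):
        print(budget, "W ->", select_model(budget))
```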
Other defense use cases include logistics resupply and casualty evacuation; both are hazardous areas of operation that put personnel at significant risk of injury. Again, AI/ML implementations will improve parts of these tasks in an evolutionary way – initially, think increasing automation of tasks rather than a leap to full autonomy. The use of technology must serve to protect human soldiers, not to endanger them through error or unreliability.
There are many demonstrable ways in which AI/ML technology is gaining acceptance and proving useful without the need to wait for, or worry about, an all-encompassing artificial general intelligence agent. We won’t see a “Terminator”-style robot any time soon, and we should all be grateful for that!
Simon Collins is Director of Product Management at Abaco Systems.
Abaco Systems · https://www.abaco.com/