AI and military RF systems
November 15, 2018
Advances in artificial intelligence (AI) are enabling significant leaps in science and technology, including the fields of digital signal processing (DSP) and radio frequency (RF) systems. Methods nominally labeled as "AI" have been applied to radio systems for decades, but always with the goal of optimizing the control plane of a hand-engineered system (e.g., "smart radios" or "cognitive radios"). Using a class of AI known as deep learning, we are now able to learn entirely new systems directly from sample data, which can provide greater sensitivity, better performance, and reduced power consumption and processing requirements compared to traditional approaches. The effect of this breakthrough methodology will be significant, and as with many new technologies, we will first see its use and impact in military systems.
The accelerating complexity of wireless design has resulted in difficult trade-offs and large development expenses. Some of the complexities of RF system design are innate to radio, such as hardware impairments and channel effects. Another source of complexity is simply the breadth of the degrees of freedom available to radios. Communications and RF parameters – such as antennas, channels, bands, beams, codes, and bandwidths – represent degrees of freedom that must be controlled by the radio in either static or dynamic modalities. Recent advances in semiconductor technology have made wider bandwidths of spectrum accessible with fewer parts and less power, but those wider bandwidths require more computation when real-time processing is needed. Furthermore, as wireless protocols grow more complex, spectrum environments become more contested, and electronic warfare (EW) grows in sophistication, the baseband processing required by military radios becomes more complex and specialized.
Taken together, the impairments, degrees of freedom, real-time requirements, and hostile environment represent an optimization problem that quickly becomes intractable. Indeed, fully optimizing RF systems with this level of complexity has never been practical. Instead, designers have relied on simplified closed-form models that don’t accurately capture real-world effects and have fallen back on piecemeal optimization wherein individual components are optimized but full end-to-end optimization is limited.
In the last few years, there have been significant advances in AI, especially in a class of machine-learning techniques known as deep learning (DL). Where human designers have labored to hand-engineer solutions to difficult problems, DL provides a methodology in which solutions are trained directly on large sets of complex, problem-specific data.
Artificial intelligence and radio
To understand how AI can address the design complexity of RF systems, it’s helpful to first have a high-level understanding of the recent advances that have driven the explosion of AI-based systems. The term “AI” has been in use for decades and broadly encompasses techniques in which a machine makes decisions to solve a problem. “Machine learning” (ML) refers to a type of AI in which a machine is trained with data to solve a particular problem. “Deep learning” is then a class of ML capable of “feature learning,” a process whereby the machine determines which aspects of the data to use for its decision-making, rather than a human designer specifying the salient characteristics.
To explain this concept, we’ll work through a computer-vision example of finding faces in an image: Prior to the advent of DL, facial recognition used hand-engineered features and algorithms. A designer would write algorithms that first attempted to locate eyes in an image, and once those were found, the nose, the mouth, the jawline and ears, and then the outline of the entire face. The algorithm was hand-designed, based on years of research into what techniques were effective.
The DL approach is dramatically different. Continuing the facial-recognition example, an engineer simply feeds a dataset of images containing faces to a DL model and tells it where the faces are in the training data. Through the training process, the machine learns to recognize faces in images without a human specifying the features or algorithms by which to make decisions. Furthermore, where a human might struggle algorithmically with differences and complexity in the data (e.g., a face turned away from the camera so that both eyes aren’t visible), the machine benefits from it: that variation strengthens the robustness of the trained model. Automated feature learning is one of DL’s most important benefits and makes it a powerful tool for tackling some of the toughest problems in RF system design.
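As a toy illustration of feature learning, the sketch below trains a small convolutional network end-to-end on labeled images with no hand-engineered features; the network's early layers learn their own filters from the data. MNIST digits stand in for the face example purely because the dataset is small and freely available, and the architecture shown is an arbitrary choice, not a recommendation.

```python
# Toy sketch of automated feature learning (PyTorch): a small convolutional
# network trained end-to-end on labeled images, with no hand-engineered
# features. MNIST digits are used here only as a convenient stand-in.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

data = datasets.MNIST(root="data", train=True, download=True,
                      transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(data, batch_size=128, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # learned filters
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 7 * 7, 10),                     # class scores
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One pass over the data: the network decides which image features matter.
for images, labels in loader:
    loss = loss_fn(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```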
DL also enables end-to-end learning, which refers to training a model that jointly optimizes both ends of an information flow, treating everything in between as a unified system. For example, a model can jointly learn an encoder and decoder for a radio transmitter and receiver that optimizes over the end-to-end system (e.g., the digital-to-analog converter [DAC], RF components, antennas, the wireless channel itself, and the receiver network and analog-to-digital converter [ADC]). The “end-to-end system” might also be something less broad, like the synchronization and reception chain for a specific protocol. Indeed, this is often the usage pattern when integrating DL into existing systems where there is less flexibility in the processing flow.
A key advantage of end-to-end learning is that instead of attempting to optimize a system in piecemeal fashion by individually tuning each component and then stitching them together, DL is able to treat the entire system as an end-to-end function and learn optimal solutions over the combined system holistically.
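As a rough illustration of end-to-end learning over a radio link (and not a description of any DeepSig product), the PyTorch sketch below trains a toy "channel autoencoder": an encoder network maps messages to channel samples, a simulated additive-white-Gaussian-noise channel corrupts them, and a decoder network recovers the message, with both ends optimized jointly through the channel. The block size, layer widths, SNR, and channel model are all illustrative assumptions.

```python
# Minimal channel-autoencoder sketch (PyTorch): jointly learn a transmitter
# (encoder) and receiver (decoder) through a differentiable AWGN channel.
# Block sizes, layer widths, and the channel model are illustrative only.
import torch
import torch.nn as nn

M = 16          # messages per block (4 bits)
N_CHANNEL = 8   # real-valued channel uses per block (4 complex samples)

encoder = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, N_CHANNEL))
decoder = nn.Sequential(nn.Linear(N_CHANNEL, 64), nn.ReLU(), nn.Linear(64, M))

def awgn(x, snr_db):
    # Normalize average transmit power, then add white Gaussian noise.
    x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()
    return x + (10 ** (-snr_db / 20.0)) * torch.randn_like(x)

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    msgs = torch.randint(0, M, (256,))                    # random message indices
    one_hot = nn.functional.one_hot(msgs, M).float()      # transmitter input
    logits = decoder(awgn(encoder(one_hot), snr_db=7.0))  # tx -> channel -> rx
    loss = loss_fn(logits, msgs)                          # end-to-end error objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the loss is computed on the recovered messages, every trainable piece between the two ends is optimized against the same objective, which is the essence of the end-to-end approach.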
Military RF with AI
Deep learning-based approaches can not only considerably improve the performance of existing missions, but also enable entirely new capabilities and operational concepts. Spectrum sensing and signal classification were some of the first radio applications to benefit dramatically from DL.
Whereas previous automatic modulation classification (AMC) and spectrum-monitoring approaches required labor-intensive, hand-engineered feature extraction and selection that often took teams of engineers months to design and deploy, a DL-based system can be trained for new signal types in a matter of hours.
Furthermore, performing signal detection and classification using a trained deep neural network takes a few milliseconds. When compared to the iterative and algorithmic signal search, detection, and classification using traditional methodologies, this can represent several orders of magnitude in performance improvement. These gains also translate to reduced power consumption and computational requirements, and the trained models typically provide at least twice the sensitivity of existing approaches.
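For a concrete sense of what such a classifier looks like, the sketch below defines a small convolutional network that operates directly on raw I/Q samples and produces a signal-class prediction in a single forward pass. The window length, layer sizes, and class count are placeholders, not the architecture of OmniSIG or any other fielded system.

```python
# Minimal sketch of a convolutional signal classifier on raw I/Q samples
# (PyTorch). Architecture, window length, and class count are placeholders.
import torch
import torch.nn as nn

NUM_CLASSES = 11     # e.g., number of modulation/signal types of interest
WINDOW = 1024        # I/Q samples per classification window

model = nn.Sequential(
    nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),   # 2 input rows: I and Q
    nn.MaxPool1d(4),
    nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, NUM_CLASSES),                               # per-class scores
)

# Inference on one captured window: a (1, 2, WINDOW) tensor of I and Q rows.
iq_window = torch.randn(1, 2, WINDOW)        # stand-in for real receiver samples
with torch.no_grad():
    logits = model(iq_window)
    predicted = logits.argmax(dim=1)         # index of the most likely signal type
```

Once trained, each classification is just this one forward pass, which is where the millisecond-scale inference times come from.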
DeepSig has commercialized DL-based RF sensing technology in its OmniSIG Sensor software product. The automated feature learning provided by DL approaches enables the OmniSIG sensor to recognize new signal types after being trained on just a few seconds' worth of signal capture. (Figure 1.)
Figure 1: The OmniSIG sensor performing detection & classification in the cellular band on streaming data from a general-purpose software-defined radio.
For learned communications systems, including end-to-end learning that facilitates training directly over the physical layer, DeepSig’s OmniPHY software enables users to learn communications systems optimized for difficult channel conditions, hostile spectrum environments, and limited hardware performance. Examples include non-line-of-sight communications, antijam capabilities, multi-user systems in contested environments, and mitigating the effects of hardware distortion.
One of the advantages of learned communications systems is easy optimization for different missions. Often, what matters most is throughput and latency, but an individual user might instead prioritize the operational range of the link, power consumption, or even signature and probability of detection or intercept. Moreover, as in all cases with machine learning, the more you know about the operational environment, the more effective your trained solution can be.
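Purely as an illustration of mission-driven optimization (and not OmniPHY's actual interface), one common pattern is to fold mission priorities into the training objective as weighted penalty terms; the sketch below uses hypothetical reliability, power, and signature terms.

```python
# Hypothetical sketch: re-targeting a learned link for different missions by
# re-weighting terms in the training objective. The penalty terms and weights
# below are illustrative assumptions, not any product's interface.
import torch
import torch.nn as nn

def mission_loss(logits, msgs, tx_samples, w_reliability=1.0, w_power=0.0, w_signature=0.0):
    """Weighted sum of differentiable penalties: symbol errors, radiated power,
    and a crude spectral-peak proxy for signature/detectability."""
    reliability = nn.functional.cross_entropy(logits, msgs)
    power = tx_samples.pow(2).mean()
    signature = torch.fft.rfft(tx_samples, dim=-1).abs().max(dim=-1).values.mean()
    return w_reliability * reliability + w_power * power + w_signature * signature

# A throughput-focused mission weights reliability alone; a low-probability-of-
# intercept mission would also penalize radiated power and spectral signature,
# e.g. mission_loss(logits, msgs, tx, w_power=0.3, w_signature=1.0).
```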
Combining DL-based sensing with active radio waveforms makes possible entirely new classes of adaptive waveforms and EW capable of coping with today’s contested spectrum environments. Training DL-based systems benefits from substantial processing performance, but once trained, the model can be readily deployed into low-SWaP [size, weight, and power] embedded systems, such as edge sensors and tactical radios.
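The train-on-servers, deploy-to-the-edge workflow can be as simple as exporting the trained network to an interchange format consumed by embedded runtimes and FPGA toolflows. The sketch below shows one such path using ONNX, with a placeholder network standing in for a trained classifier or learned modem; the exact deployment toolchain depends on the target hardware.

```python
# Minimal sketch of exporting a trained model for embedded deployment via
# ONNX. The tiny placeholder network below stands in for a real trained
# classifier or learned modem; target toolchains vary by platform.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv1d(2, 16, 7, padding=3), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 11))
model.eval()

example_input = torch.randn(1, 2, 1024)   # one I/Q window, matching training shape
torch.onnx.export(model, example_input, "signal_classifier.onnx", opset_version=13)
```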
Hardware architectures for AI in radio
There are typically two types of commercial off-the-shelf (COTS) cognitive radio (CR) systems that can enable AI radio systems for defense. The first is a compact system deployed in the field, using AI to determine actionable intelligence in real time; these leverage the typical CR architecture of a field-programmable gate array (FPGA) and general-purpose processor (GPP), sometimes with the addition of a compact graphics-processing-unit (GPU) module. The second is a modular, scalable, more compute-intensive system, typically consisting of CRs coupled to high-end servers with powerful GPUs that perform offline processing.
For low-SWaP systems, the hardware-processing efficiency and low-latency performance of the FPGA, coupled with the programmability of a GPP, make a lot of sense. While the FPGA may be harder to program, it is the key enabler for achieving low SWaP in real-time systems.
For larger compute-intensive systems, it’s important to have a hardware architecture that scales and can heterogeneously leverage best-in-class processors. These architectures typically comprise FPGAs for baseband processing, GPPs for control, and GPUs for AI processing. GPUs offer a nice blend of being able to process massive amounts of data while being relatively easy to program. While a downside of GPUs is their long data pipelines, which lead to higher transfer times, this has historically not been an issue as most such systems are not ultra-low-latency.
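The sketch below illustrates the usage pattern that suits GPUs in these larger systems: buffering many capture windows and classifying them in one batched pass, so that the host-to-GPU transfer cost is amortized across the batch. The model and shapes are placeholders.

```python
# Illustrative sketch: amortizing host-to-GPU transfer by batching many I/Q
# windows per inference call, the pattern suited to larger, non-real-time
# systems. The placeholder model and shapes are assumptions.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Conv1d(2, 32, 7, padding=3), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 11)).to(device)

windows = torch.randn(512, 2, 1024)          # many buffered I/Q capture windows
with torch.no_grad():
    logits = model(windows.to(device))       # one bulk transfer + one batched pass
    classes = logits.argmax(dim=1).cpu()     # results returned to the host
```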
Table 1 provides a high-level overview of the advantages (in green) and disadvantages (in red) of each processor type. Of course, there are ranges of devices within each category that trade off power at the expense of performance, which should be weighed as part of design analysis.
Table 1: AI processor options and trade-offs.
An example of a low-SWaP CR is an embedded radio combining a low-power GPU with a compact CR. (Figure 2.)
An example of a larger compute-intensive system is the DARPA Colosseum testbed being used in the Spectrum Collaboration Challenge (Figure 3). This system includes 128 two-channel CRs with on-board FPGAs, ATCA blades with multiple FPGAs for data aggregation, as well as high-end servers with powerful GPUs for AI processing.
Figure 3: The DARPA Colosseum testbed used in the Spectrum Collaboration Challenge.
Military HW systems
While these systems are good examples of CRs being deployed for commercial and defense AI applications today, this certainly prompts the question of how AI considerations may impact future hardware architectures and designs. From a processor perspective, the trend toward heterogeneous processor topologies will continue. An example is the ACAP [Adaptive Compute Acceleration Platform] from Xilinx, which includes tiles of vector-processing engines well-suited for machine-learning inference alongside the FPGA logic and other cores, such as multiple ARM cores and, in some cases, even accelerators for forward error correction and ADCs/DACs. The ACAP is intended to be a heterogeneous AI processor, and with ADCs and DACs could essentially be an AI system-on-chip (SoC) coupled directly to the RF front end. Of course, other chip vendors are not standing still, so one can expect very interesting and powerful device options for AI in the future, with some vying for complete SoC solutions, others taking more of a family-of-best-in-class-processors approach, and some combining both strategies. Similarly, one can expect COTS SDR/CR vendors to develop new low-SWaP CRs using these SoCs, as well as large, modular systems built from multiple best-in-class processors that can be scaled up to support massive systems with incredible AI and ML capabilities.
The possibilities are …
AI and the technologies it enables have made great leaps in recent years and are poised to radically change military RF systems, applications, and CONOPS [concept of operations]. It’s now possible to train a custom modem optimized for an operational environment and specialized hardware in a matter of hours, forward-deploy it to low-SWaP embedded systems, and then repeat the process. Deep learning’s ability to learn models directly from data provides a new methodology that, in some ways, commoditizes physical-layer and signal-processing design while simultaneously making it possible to reach levels of optimization that were previously intractable.
Ben Hilburn is the Director of Engineering at DeepSig Inc., which is commercializing the fundamental research behind deep learning applied to wireless communications and signal processing. He also runs GNU Radio, the most widely used open-source signal processing toolkit in the world. He works heavily in the open-source community across government, industry, and academia.
Manuel Uhm ([email protected]) is the Director of Marketing at Ettus Research/National Instruments (NI). Manuel is also the Chief Marketing Officer of the Wireless Innovation Forum (formerly the SDR Forum), which is responsible for the SCA (Software Communications Architecture) standard for military radios, and CBRS (Citizens Broadband Radio Service) for spectrum-sharing between naval radar and commercial broadband services.
DeepSig, Inc. www.deepsig.io
Ettus Research/National Instruments www.ni.com