Videocentric ISR missions push rugged computing to the limits
Story | August 01, 2017
Intelligence, surveillance, and reconnaissance (ISR) missions put extreme performance demands on data servers, as they struggle to contend with large amounts of video and data coming in while hewing to strict size, weight, and power (SWaP) constraints. Even as SWaP dominates the conversation, though, designers are beginning to realize that performance eclipses nearly everything else.
ISR missions are bringing in all kinds of actionable data, but not without pushing high-power, high-performance servers to the limit. “Our military has the tremendous capability to gather the world’s best intelligence, but the hardware and software applications that really need to do this collection of information in real time are really pushing COTS [commercial off-the-shelf] products to the limits,” says Jason Wade, president of ZMicro in San Diego.
SWaP: The P stands for performance
The commercial technology revolution is undoubtedly affecting military technology, and the focus is definitely “SWaP, SWaP, SWaP. How do you reduce size, weight, and power?” Wade asks. “What we’re seeing is that the SWaP acronym might still be valid but the definition has completely changed. While there’s still a focus on size and weight, we’re not seeing such a focus on power anymore; what we are seeing is a strong push towards performance. Our customers are really, really, starting to push the boundaries with technology and where it’s going, therefore we need to have the highest-performance systems in the field. This is really driven primarily by the need to gather some deep intelligence out in the field.”
Performance means that the server needs to be able to handle video processing, data processing, storage, even live video streaming, all in a small package. “We are seeing more and more small-form-factor embedded systems for applications on aircraft, Humvees and other ground vehicles, and UAVs [unmanned aerial vehicles], where space is an issue,” says Aneesh Kothari, marketing manager at Systel in Sugar Land, Texas. An example of the small form factor is Systel’s EB7001 (Figure 1). “It’s a rugged high-performance small-form-factor embedded system that provides video processing, compute, encoding, storage, and networking capabilities in a single system,” Kothari says.
Figure 1: Systel’s EB7001 is a rugged small-form-factor embedded video capture system for intelligence, surveillance, and reconnaissance (ISR) applications. Photo courtesy of Systel.
Serving up multiple live video streams
With increased demand for more “eyes in the sky” to gather intelligence, “video is playing a much more vital role and will continue to do so going forward,” Kothari says. “What we really see is this push towards everything to be very videocentric. This leads to managing increased data volume, bandwidth issues, and the challenge of being able to accurately and consistently stream live video feeds from various platforms in the battlefield to command and control headquarters to then be able to make real-time decisions.”
This push for a videocentric world brings latency issues. “It does no one any good if you have these video feeds streaming in and you’re not seeing the video till a few seconds after the fact,” Kothari says. “Therefore, you really have to minimize that latency, which is hard at high bandwidths and high resolutions.”
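To make the latency point concrete, here is a back-of-envelope glass-to-glass latency budget sketched in Python. The per-stage figures and the 150 ms target are illustrative assumptions for a 60 Hz feed, not measurements from any of the vendors quoted here.

```python
# Back-of-envelope glass-to-glass latency budget for a live ISR video feed.
# All per-stage figures below are illustrative assumptions, not measured values.

STAGES_MS = {
    "sensor capture/readout": 16.7,   # one frame time at 60 Hz
    "low-latency encode":     25.0,   # assumed encoder preset
    "network transport":      30.0,   # assumed datalink + routing delay
    "decode":                 15.0,
    "display refresh":        16.7,   # one frame time at 60 Hz
}

BUDGET_MS = 150.0  # assumed end-to-end target for "real-time" viewing

total = sum(STAGES_MS.values())
for stage, ms in STAGES_MS.items():
    print(f"{stage:24s} {ms:6.1f} ms")
print(f"{'total':24s} {total:6.1f} ms  (budget {BUDGET_MS:.0f} ms)")
print("within budget" if total <= BUDGET_MS else "over budget")
```

Raising resolution or frame rate squeezes the encode and transport stages hardest, which is why the quote above ties latency pain to bandwidth and resolution.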
In order to deliver that high resolution and capability in real time, companies are taking a deep dive into the design process. “When they’re developing this software, they’re not looking at, ‘How do we keep this power down?’ They’re not looking at the hardware limitations; what they’re really looking at is the limitations of technology that’s out there,” Wade explains. In response to this need for performance, “they’re using high-end gamer GPUs [graphics processing units], the latest Intel CPUs [central processing units], the latest multicore processors.”
There is “significant interest in general-purpose processing and virtualization,” says Jim Shaw, executive vice president at Crystal Group in Cedar Rapids, Iowa. This interest is growing because “the platforms that are in demand continue to push the edge for CUDA cores or CPU cores. Much of the core cycles being expended focus on analyzing large amounts of data or creating virtual machines to spin off processes for control or communication” (Figure 2).
Figure 2: Crystal Group’s rugged embedded computer RE1312 features 6th-generation i7/Broadwell-DE Xeon-D CPU technology. Photo courtesy of Crystal Group.
The live video stream is not just in one channel: “We have some airborne ISR customers and what they’re doing with their software is they’re pulling in multiple video streams,” Wade says. “They’re fusing this video. They’re fusing data sources. They’re georegistering data. They’re encoding, they’re decoding, and then they’re starting the full exploitation. There’s so much data that’s getting processed and it’s such a priority to get this data for good intelligence that the software developers are now starting to push the performance boundaries.”
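As a rough illustration of the pattern Wade describes, the Python sketch below pulls several simulated feeds in parallel, decodes and georegisters each frame, and hands everything to a single fusion stage. Here decode_frame() and georegister() are hypothetical stand-ins for the codec and sensor-model libraries a real system would use, and the "frames" are plain strings.

```python
# Minimal sketch of a multi-stream ingest/fuse pipeline, assuming one thread
# per feed and a single fusion consumer. Not any vendor's architecture.
import queue
import threading

def decode_frame(raw):
    # Hypothetical stand-in for a hardware video-decoder call.
    return {"pixels": raw, "meta": {}}

def georegister(frame, nav):
    # Hypothetical stand-in for sensor-model georegistration.
    frame["meta"]["nav"] = nav
    return frame

def ingest(stream_id, source, out_q):
    # One thread per incoming feed: decode, georegister, hand off for fusion.
    for raw in source:
        out_q.put((stream_id, georegister(decode_frame(raw), nav={"id": stream_id})))
    out_q.put((stream_id, None))  # end-of-stream marker

def fuse(out_q, n_streams):
    # Single consumer: combine frames from all feeds into one fused product.
    done = 0
    while done < n_streams:
        stream_id, frame = out_q.get()
        if frame is None:
            done += 1
            continue
        print(f"fused frame from stream {stream_id}: {frame['pixels']}")

if __name__ == "__main__":
    feeds = {0: ["f0a", "f0b"], 1: ["f1a", "f1b"]}  # simulated raw frames
    q = queue.Queue()
    workers = [threading.Thread(target=ingest, args=(i, src, q))
               for i, src in feeds.items()]
    for w in workers:
        w.start()
    fuse(q, n_streams=len(feeds))
    for w in workers:
        w.join()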
Leveraging standards
The warfighter – looking to gain an edge by streaming video and processing data in real time – is increasingly finding that open standards can ease the way. “Increased display resolutions are driving the video processors to move more and more data quickly,” says Steve Motter, vice president of business development at display provider IEE in Van Nuys, California. “The open standards are driving video protocols at several levels. Within the devices, there are newer internal video standards, such as MIPI and eDP, that directly link the silicon processor to the LCD row/column driver circuits.”
Engineers are leveraging everything at their disposal to take this technology to the next level. “Between embedded computer products, we are seeing high-speed serial digital interfaces, such as SMPTE-292, become more common replacements for legacy video interfaces,” Motter adds.
Designers are also looking to fiber optics for better performance: “In the aerospace community, ARINC-818 is offering high-speed serial on either copper or fiber optic, allowing for lightweight long cable runs,” Motter adds. “Although not an open standard, GigE Vision may make inroads for a switched, packet-based video transport. ARINC-661 is an example of a mechanism to manipulate pre-certified graphic display elements located in a display’s local library by a remote user application.”
GPUs and CPUs are also popular: “More and more we are seeing customers wanting to integrate GPUs and higher-end CPUs within a closed environment, which bring thermal concerns to the forefront,” Kothari says.
Ultimately, data is at the center of everything. “The latest Intel Xeon Skylake architectures are demonstrating exceptional advancements in state-of-the-art computing,” Shaw adds. “With these advances, however, comes the reality of increased thermal challenges and packaging difficulties.”
Thermal challenges magnified
Designing in the latest technology means that engineers are pushing the thermal threshold in systems. For example, designers at ZMicro are starting to “load in multicore CPUs ... I think we’re now selling 16-core Xeons dual-socketed,” Wade says.
What does that mean for the engineer? It means 300 watts of CPU power to contend with. “Then we’re putting in these high-end gamer GPUs, which adds another 250 watts. Now with just CPU and GPUs, we’re pushing 550 watts of processing. It’s a serious engineering challenge to figure out how to keep those computers and those servers cool, but at the end of the day, that’s our job,” Wade states.
Powerful CPUs and GPUs are certainly posing a challenge: “From a core and rack-mounted server perspective, the power dissipation per core continues to drop; however, the core count per CPU is increasing at a significant rate,” Shaw says. “What used to be an eight- to 12-core system dissipating 80 watts is being replaced with silicon that has 24 cores and is dumping 150 watts of heat into a socket. With a dual-socket motherboard, this equates to 300 watts that need to be dissipated in a 1U space.”
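The arithmetic in these quotes is worth working out explicitly. The short Python sketch below tallies the heat load Wade and Shaw describe and applies the common air-cooling rule of thumb (CFM ≈ 1.76 × watts / ΔT in °C); the 20 °C temperature-rise target is an assumption for illustration.

```python
# Heat-load arithmetic from the quotes above; only the delta-T is assumed.
cpu_sockets   = 2      # dual-socket motherboard (Wade, Shaw)
watts_per_cpu = 150    # 24-core Xeon per socket (Shaw)
gpu_watts     = 250    # high-end "gamer" GPU (Wade)

cpu_watts     = cpu_sockets * watts_per_cpu          # 300 W, matching both quotes
compute_watts = cpu_watts + gpu_watts                # ~550 W, matching Wade

print(f"CPU load:  {cpu_watts} W")
print(f"GPU load:  {gpu_watts} W")
print(f"Total:     {compute_watts} W of heat to reject")

# Rule-of-thumb airflow needed to hold a given temperature rise across the
# chassis: CFM ~= 1.76 * watts / delta_T(degrees C), at sea level.
delta_t_c = 20  # assumed allowable air temperature rise
cfm = 1.76 * compute_watts / delta_t_c
print(f"Approx. airflow for a {delta_t_c} C rise: {cfm:.0f} CFM")
```

Pushing roughly 50 CFM through a sealed, rugged 1U enclosure is exactly the kind of "serious engineering challenge" Wade describes.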
“As computing becomes more dense, our engineers must think outside the box to find ways to address thermal concerns,” Kothari notes. This also means working with the user during the design process in order to “marry the customer requirements with power budgeting, package size, and environmental conditions to determine the optimal thermal solution.”
Servers are carrying quite a load, and mitigating the heat calls for a combination of methods, thermal modeling among them. “There’s heatsinking, heat pipes. There’s making sure that there’s plenty of exhaust and evacuation,” Wade says. “At the end of the day, we consider it always a high-risk item, and so we track that through our design process. It’s just a combination of prudent thermal modeling in design. It’s verification testing for the various components after the prototype is built. And then it’s the qualification to make sure at the end of the day that the system does perform.”
As industry experts try to satisfy users’ requirements, “it’s a design challenge,” Wade states simply. “It’s an engineering risk; we need to understand what tried-and-true techniques are used to mitigate the thermals.”
The next phase of development
Department of Defense (DoD) officials have been preaching their SWaP mantra for years, so much so that SWaP-optimized embedded servers are the future for the military, Kothari says. “Modern warfighter applications will demand single LRUs [line-replaceable units] to replace multiple legacy systems, moving from standalone systems to all-in-one embedded solutions.”
SWaP requirements continue to drive the evolution of today’s military-use rugged computing, but technology doesn’t stand still, and figuring out what’s next for rugged computing is an exciting challenge. “Where we’re at right now? We’re kind of in that next phase of technology where the next major technology disruption is self-driving cars,” Wade says.
While it may be difficult to project into the future, the industry should look at “the impact of cellphones, tablets, and IoT [Internet of Things] on the market,” Shaw points out. This reality will enable the introduction of “very low-cost, high-density storage combined with advanced processing architectures that will play a role in the next five to 10 years. Additionally, these ‘microappliances’ could be connected to everything important to a particular user via connections integrated on a single piece of silicon.”
These advancements in technology, while the military user may not initially relate to them, “are really going to accelerate the implementation of machine learning and artificial intelligence into industries across the world, and of course the military is going to be all over this technology and this capability,” Wade asserts.
For the next phase to begin, rugged computers will have to “support the high-end computation that’s needed to support emerging technology,” Wade adds. “There will be systems that are not going to be the traditional CPU with PCI card expansion and storage. It’s going to be systems that are more architected around heterogeneous computing, so you’ll see FPGAs [field-programmable gate arrays], you’ll see GPUs, you’ll see CPUs all working in concert, load-sharing, balancing the different computing requirements to take advantage of the high-end parallel processing that’s needed.”
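As a toy sketch of what such load-sharing might look like, the Python snippet below routes each job to the least-loaded device class that can handle it. The device names, job types, and scheduling policy are all illustrative assumptions, not any vendor’s architecture.

```python
# Toy heterogeneous-computing dispatcher: route each job to the least-loaded
# device that can handle its type. Names and policy are purely illustrative.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    kinds: set          # job types this device handles well
    load: int = 0       # outstanding work units

DEVICES = [
    Device("cpu0",  {"control", "decode"}),
    Device("gpu0",  {"decode", "inference"}),
    Device("fpga0", {"sensor-io", "decode"}),
]

def dispatch(job_kind, units):
    # Pick the least-loaded device capable of this job type.
    capable = [d for d in DEVICES if job_kind in d.kinds]
    if not capable:
        raise ValueError(f"no device handles {job_kind!r}")
    dev = min(capable, key=lambda d: d.load)
    dev.load += units
    return dev.name

for kind, units in [("decode", 4), ("inference", 8), ("decode", 4), ("control", 1)]:
    print(f"{kind:10s} -> {dispatch(kind, units)}")
```

Even this simplistic policy shows the idea Wade sketches: decode work spreads across the CPU, GPU, and FPGA as their queues fill, while each job type still lands on silicon suited to it.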