10 GbE in net-centric warfare: Why commercial network cards can't drive the application
October 27, 2009
10 GbE provides a standards-based fat pipe to move data. However, real-time applications present unique challenges that must be addressed at the outset.
The world of technology has flipped 180 degrees: No longer is the best technology developed for the military and then later commercialized. Now, the defense department is playing catch-up to off-the-shelf technology. These days, civilians are accustomed to a profoundly high level of networking connectivity, enabled largely by Ethernet and Internet Protocol (IP). The military wants the same.
The appeal of Ethernet-based networking in the digitized battlefield is clear. And with the advent of 10 GbE, Ethernet is no longer limited to the command and control fabric in systems. It can also function as a “fat pipe” that reaches all the way down to high-bandwidth sensors. So, it’s not surprising that 10 GbE is being designed into more and more defense applications, both in the main network and as the pipe to and from sensors or effectors.
But using 10 GbE is not without problems.
The fire hose crisis
Ever try to drink from a fire hose? It’s not a matter of digesting or processing the water. You can’t even ingest it. A processor faced with a 10 GbE pipe has the same problem.
A common rule of thumb holds that handling Ethernet traffic consumes about 1 GHz of CPU processing per 1 Gbps of traffic. By that measure, running the Ethernet protocol at 10 Gbps would demand roughly 10 GHz of processing, far more than any single CPU can deliver.
This problem also applies to commercial servers, but adopting commercial-world solutions for net-centric defense applications doesn’t work. Let’s look at why commercial network cards don’t fit the bill.
The problem of handling bursts
High-performance real-time sensor applications, such as the ELINT analysis systems found on platforms like Boeing’s P-8A Poseidon (Figure 1), often need to sample in high-speed (hundreds of MSps) bursts lasting at least a few milliseconds. For a single channel, this produces an incoming burst of 10 MB to 30 MB or more. On a 10 GbE link, that corresponds to receiving 1,000 to 20,000 back-to-back packets at line rate, depending on the MTU size employed.
Figure 1: High-performance real-time sensor applications, such as ELINT analysis systems found on platforms like Boeing’s P-8A Poseidon (pictured), often need to sample in high-speed (hundreds of MSps) bursts lasting a few milliseconds. Boeing photo by Ed Turner
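To see where those packet counts come from, here is a back-of-envelope calculation in C. The sample rate, sample width, and burst duration are assumed values chosen to fall within the ranges cited above, not figures from any specific system.

```c
/* Back-of-envelope burst math for a single sensor channel.
 * Illustrative only; the sample rate, sample width, and burst
 * duration are assumptions, not figures from a real system. */
#include <stdio.h>

int main(void)
{
    const double sample_rate_sps  = 500e6;  /* "hundreds of MSps" */
    const int    bytes_per_sample = 2;      /* e.g., 16-bit ADC samples */
    const double burst_ms         = 10.0;   /* "a few milliseconds" */

    double burst_bytes = sample_rate_sps * bytes_per_sample * (burst_ms / 1e3);

    /* Back-to-back packet counts at line rate for two MTU choices */
    double pkts_std   = burst_bytes / 1500.0;  /* standard Ethernet MTU */
    double pkts_jumbo = burst_bytes / 9000.0;  /* jumbo frames */

    printf("Burst size: %.0f MB\n", burst_bytes / 1e6);          /* 10 MB */
    printf("Packets at 1500-byte MTU: %.0f\n", pkts_std);        /* ~6,667 */
    printf("Packets at 9000-byte MTU: %.0f\n", pkts_jumbo);      /* ~1,111 */
    return 0;
}
```

Note that absorbing such a burst without loss implies the interface needs buffer memory on the order of the full burst size per channel.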
If the receiving interface cannot absorb all of this data into system memory, packets are dropped. In a commercial server environment, dropped packets are simply resent. But in sensor applications, there is neither the time nor the hardware facility to resend the data. Typical commercial 10 GbE interface cards, designed and cost-optimized for an environment where retransmission is permitted and line-rate bursts are rare, simply cannot address this problem.
Beyond protocol offload
In many real-time applications, offloading the transport protocol provides only part of the solution. COMINT or ELINT direction finding, network surveillance, and intercept, for instance, are applications that collect sensor data to synthesize multidimensional models of the environment. Such applications rely on fusing the sensor data, and bringing the data together requires accurate and precise time stamping of the individual data streams. Outgoing data in Electronic Countermeasures (ECM) or simulation systems must similarly be precisely time gated and synchronized to other events.
Precise time stamping and gating can only be performed through a deterministic interface. The CPU, vulnerable to software interrupt latencies and inconsistent bus access, cannot provide the necessary time-stamp precision. Commercial Ethernet cards, with or without offload, are built simply to move traffic from place to place; time stamping is typically not part of their functionality.
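As one illustration of what a deterministic interface can provide, consider per-packet metadata stamped in hardware and handed to software alongside the payload. The struct below is a hypothetical layout (the names and fields are our own invention, not a published product interface) showing the kind of information such an interface might prepend to each packet:

```c
/* Hypothetical per-packet metadata that an intelligent (e.g., FPGA-based)
 * 10 GbE interface might prepend to each packet in system memory.
 * Layout and names are illustrative, not a real product interface. */
#include <stdint.h>
#include <stdio.h>

struct rt_pkt_meta {
    uint64_t timestamp_ns;   /* hardware timestamp latched at the PHY/MAC,
                                synchronized to a system-wide time reference */
    uint32_t channel_id;     /* which sensor stream the packet belongs to */
    uint32_t flags;          /* e.g., RT_META_CRC_ERR, RT_META_SYNC_PULSE */
    uint32_t payload_len;    /* bytes of payload following this header */
    uint32_t reserved;       /* pad to a 64-bit boundary */
};

#define RT_META_CRC_ERR    (1u << 0)  /* payload delivered despite CRC error */
#define RT_META_SYNC_PULSE (1u << 1)  /* coincides with an external sync event */

int main(void)
{
    printf("metadata header size: %zu bytes\n", sizeof(struct rt_pkt_meta));
    return 0;
}
```

Because the stamp is latched in hardware at the interface, it is immune to the interrupt and bus latencies that make CPU-side time stamping imprecise.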
Time stamping is one example of the additional special functionality these real-time applications require. Another kind of problem occurs when offloading the protocol still leaves data rates too high to manage after the data exits the interface. Consider a multicamera GigE Vision-based situational awareness system running on top of the standard UDP protocol, or an intrusion prevention system that guards a network against viruses, Trojans, and other cyber threats from the WAN by inspecting packet contents at line rate. Both applications operate on payload data arriving at a 10 Gbps rate. So offloading only the transport protocol, as a commercial offload solution would do, still leaves the CPU struggling to process data at 10 Gbps.
Making a mountain out of a molehill
Real-time digitized signals are destined for signal processing such as low-pass or band-pass filtering or error correction. These algorithms correct or compensate for short runs of errors, such as an error in a few consecutive data points. When faced with a long run of consecutive errors (such as when a large group of samples goes missing), the algorithms break down.
This is particularly relevant for Ethernet, which transmits data in packets whose checksum and Cyclic Redundancy Check (CRC) fields are used to detect errors. Per the protocol standard, when an error is detected, even one caused by a single corrupted data point, the entire packet is discarded. As many as 9,000 consecutive bytes can thus go missing, choking the signal-processing algorithm (Figure 2). Had those few errors instead been delivered to the signal-processing algorithm, they would not have caused a problem. In this way, the standard protocol stack behavior can make a mountain out of a molehill in a real-time signal-processing system.
Figure 2: When errors are detected, the entire packet is discarded, resulting in as many as 9,000 consecutive bytes going missing and subsequently choking the signal-processing algorithm.
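A sketch of the alternative behavior: rather than discarding a 9,000-byte jumbo frame over one bad byte, the receive path tags the frame and delivers it anyway, letting the downstream signal-processing stage treat the few corrupted samples as noise. The function and struct names here are hypothetical, reusing the metadata layout sketched earlier.

```c
/* Sketch of a tag-don't-drop receive policy. Standard stacks discard
 * any frame with a bad CRC; a real-time interface can instead mark
 * it and deliver it. Names reuse the hypothetical rt_pkt_meta sketch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct rt_pkt_meta {
    uint64_t timestamp_ns;
    uint32_t channel_id;
    uint32_t flags;
    uint32_t payload_len;
    uint32_t reserved;
};
#define RT_META_CRC_ERR (1u << 0)

/* Stub for the hand-off to the signal-processing stage (hypothetical). */
static void enqueue_for_dsp(const struct rt_pkt_meta *meta, const uint8_t *payload)
{
    (void)payload;
    printf("ch %u: %u bytes%s\n", (unsigned)meta->channel_id,
           (unsigned)meta->payload_len,
           (meta->flags & RT_META_CRC_ERR) ? " [CRC error tagged]" : "");
}

/* Called once per received frame; crc_ok comes from the MAC hardware. */
static void deliver_frame(struct rt_pkt_meta *meta, const uint8_t *payload, bool crc_ok)
{
    if (!crc_ok)
        meta->flags |= RT_META_CRC_ERR;  /* tag instead of discarding */
    enqueue_for_dsp(meta, payload);      /* deliver either way */
}

int main(void)
{
    uint8_t payload[9000] = {0};
    struct rt_pkt_meta meta = { .timestamp_ns = 0, .channel_id = 3,
                                .flags = 0, .payload_len = 9000, .reserved = 0 };
    deliver_frame(&meta, payload, false);  /* simulate a frame with a bad CRC */
    return 0;
}
```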
The real-time 10 GbE alternative
These problems can be circumvented with an intelligent 10 GbE interface. FPGA-based solutions are capable of line-rate performance and flexible enough to run algorithms optimized for application requirements. Unlike CPUs with their sequential processing paradigm, data flowing into an FPGA cascades like a waterfall over massive amounts of logic configured in parallel. In today's large FPGAs, the data flows through pipelines of hundreds of thousands of logic cells, each effectively running at a few hundred megahertz. For I/O, modern FPGAs are equipped with high-speed transceivers running at up to 11 Gbps, allowing them to connect to high-speed serializers or directly into high-speed optical or copper communications interfaces. With clever design, even a modern medium-sized FPGA containing, say, a hundred thousand cells can fit multiple channels of real-time 10 GbE interfaces along with higher-layer application functionality.
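A quick sanity check on the clock-rate claim: line rate is simply datapath width multiplied by clock frequency, which is why a 10 GbE MAC core commonly presents a 64-bit internal datapath clocked at 156.25 MHz.

```c
/* Why a few hundred megahertz of parallel logic is enough:
 * throughput = datapath width x clock rate. */
#include <stdio.h>

int main(void)
{
    const int    width_bits = 64;       /* parallel internal datapath */
    const double clock_hz   = 156.25e6; /* typical 10 GbE MAC core clock */

    double gbps = width_bits * clock_hz / 1e9;
    printf("Datapath throughput: %.1f Gbps\n", gbps);  /* prints 10.0 */
    return 0;
}
```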
A real-time 10 GbE system should include:
- Sufficient memory to accommodate the extended duration, line-rate bursts
- A time stamp and synchronization interface to deterministically and precisely stamp packets entering or exiting
These systems should additionally include algorithms optimized for real-time requirements, such as:
- A transport layer protocol offload tailored to the application, bus interface, and processor
- A process that alleviates CPU burden by offloading intensive application processing operations or by inspecting and dropping uninteresting packets before they ever reach the CPU (a sketch of such a filter follows this list)
- A modification of standard transport protocol behavior to tag, rather than drop, packets received with checksum errors
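As a sketch of the packet-filtering idea above: a filter stage, of the kind that could run in an FPGA receive pipeline, inspects parsed header fields and discards uninteresting packets before any data is DMA'd to host memory. The match criteria and all names here are hypothetical.

```c
/* Sketch of "drop the uninteresting packets before the CPU sees them."
 * The parsed fields, port number, and channel scheme are assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct parsed_hdr {
    uint16_t udp_dst_port;   /* extracted by the header-parser stage */
    uint32_t channel_tag;    /* application-level tag (hypothetical) */
};

#define SENSOR_UDP_PORT 5000u   /* assumed port carrying sensor traffic */

/* Returns true if the packet should be DMA'd to host memory. */
static bool keep_packet(const struct parsed_hdr *h, uint32_t active_channel_mask)
{
    if (h->udp_dst_port != SENSOR_UDP_PORT)
        return false;                  /* not sensor traffic: drop early */
    if (h->channel_tag >= 32)
        return false;                  /* malformed tag: drop */
    return (active_channel_mask >> h->channel_tag) & 1u;  /* active channels only */
}

int main(void)
{
    struct parsed_hdr pkt = { .udp_dst_port = 5000, .channel_tag = 2 };
    uint32_t mask = 0x05;  /* only channels 0 and 2 are of interest */
    printf("keep? %s\n", keep_packet(&pkt, mask) ? "yes" : "no");
    return 0;
}
```

The design point is that every packet rejected here is a packet the CPU never has to touch, which is what keeps the post-offload processing load within reach of the host processor.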
No question, civilian COTS technologies – from the ubiquitous Internet to the gaming console in your living room – are finding their way into the digitized battlefield and changing both expectations and the nature and capabilities of net-centric warfare.
When it comes to using 10 GbE in these high-performance real-time applications, it is not enough to simply grab the mass-market technologies off the shelf. Instead, while understanding and respecting the unique requirements of net-centric warfare applications, we need to tailor the implementation of the standardized 10 GbE interface to meet these real-time needs.
Rob Kraft is VP of Marketing at AdvancedIO Systems Inc. He has more than 13 years of experience in systems engineering and business roles in the embedded real-time computing industry. Prior to joining AdvancedIO, he worked at Spectrum Signal Processing and AlliedSignal Aerospace. Rob has an MASc in Electrical Engineering from the University of Toronto. He can be contacted at [email protected].
AdvancedIO Systems Inc. 604-331-1600 • www.advancedio.com