Software-induced hardware obsolescence: The big fix
September 07, 2015
Embedded systems in military applications - because of their long operational life and the changes to requirements experienced during use - are particularly prone to software aging, which can result in degraded performance. This can, in turn, lead to expensive, unplanned hardware upgrades. One approach to avoiding these upgrades is to optimize the aged software.
Unlike other critical real-time embedded systems, military avionics systems have an operational life of decades, throughout which they are regularly refreshed with updates. These regular updates arise from planned changes and also from changes to operational requirements to meet evolving military demands.
The combination of changes (which cannot be predicted at the initial design) with upgrades that take place over many years inevitably leads to an increase in the demands placed by software on the underlying computing platform. This increase in demand can lead to decreased performance capability and intermittent failures due to timing overruns. One approach to avoiding this is frequent, expensive hardware upgrades.
An alternative approach relies upon automated detection of “timing optimization opportunities” within the legacy software. This timing optimization approach can also work in the legacy-software environment.
Software aging: myth or fact?
Initially, software aging might seem to be an oxymoron: Once the code is written, it doesn’t physically degrade – unlike hardware, which is subject to random physical processes that cause components’ performance to deteriorate over time.
Of course, the above is based on the assumption that software doesn’t change. In long-lived military systems, this assumption is likely to be untrue: Operational requirements will change, and these changes will inevitably lead to software aging.
Software aging affects lengthy projects for four main reasons:
- Over the course of very long projects, the rationale behind architectural design decisions will get lost in the mists of time.
- As more decisions get made which are not in line with the original architecture, the original “shape” of the software gets lost.
- Changing fashions in software development will pull the architecture in different directions.
- Even with the best architecture, it is impossible to anticipate in advance all of the possible changes that might be required.
The net effect of such software aging is that software performance degrades over time.
Software-induced hardware obsolescence
Over the last forty years, a default assumption has been that electronics/computing performance inevitably improves over time. This “reality” leads to periodic hardware upgrades being built into long-running programs such as military systems, with the aim of taking advantage of performance improvements.
Working against these periodic performance improvements is the software aging problem: increased demand for computing capacity with little gain in functionality. Software-induced hardware obsolescence occurs when software aging creates the need for extra hardware upgrades in addition to the planned ones.
Pushing back the ravages of time
The alternative to unplanned upgrades is to improve software performance, which comes about through careful optimization. In the case of real-time systems, this typically focuses on worst-case performance, or the longest time it takes software to execute a given function.
In an ideal world, optimizations can arise from taking an existing architecture and refactoring it to a more efficient structure in the light of new requirements. Given the inevitable degradation in the software architecture that comes about through software aging, the extensive redevelopment of an entire system that this would require is unlikely to be an acceptable option. Instead, optimization must be a more “opportunistic” activity – identifying improvements and applying them without a strong understanding of the underlying architecture.
Optimization follows three main steps:
- Determine contribution (identify where in the code base to focus optimization efforts).
- Optimize (identify alternatives to existing code).
- Rinse and repeat (measure the improvement, if any, and continue until the job is done).
The single most important factor in deciding where to focus an optimization effort is understanding the contribution of each software component to the overall system performance.
“Contribution” here is used to mean the percentage of time spent executing a specific piece of code. It is derived from two values: the longest execution time of the piece of code and the number of times it is executed; their product gives the total time spent in that code on the worst-case path.
Finding the contribution of a specific module relies on first finding the worst-case path through the code, then looking at the amount of time spent in each code sub-program on that path. (See Figure 1.)
Figure 1: Contribution to worst-case execution time (WCET) by sub-program.
As the graph shows, some code makes no contribution to the worst-case path, some makes a minor contribution, and still other code makes a significant contribution. It is this last category that provides the best candidates for optimization.
Attempting to identify candidates for optimization through manual inspection of the code is not recommended, as it is effort-intensive and can lead to wasted optimization efforts; for example, by attempting to optimize code that falls into the “no-contribution” category. The best approach is to identify optimization candidates by measuring the execution time of the code.
Optimize, identify alternatives
Once optimization candidates have been identified, the next step is to optimize them. This activity, which is central to the overall process, relies upon the skill and experience of the engineering team.
A great source of optimizations comes from modules that are executed many times on the worst-case path. Each cycle that can be shaved off such code benefits from a multiplier effect on the overall path.
Rinse and repeat
Once optimizations have been made, it is necessary to measure execution times once more. This step will establish whether the system now meets its performance objectives.
If further improvements are still required at this stage, repeating the exercise of identifying optimization candidates may show up new places to focus optimization effort.
Two of the three steps described above require measuring the worst-case execution time of the code.
Typically, measuring execution times involves:
- Adding measurement points (also known as instrumentation) to the source code.
- Collecting measurements.
- Analyzing measurements.
For large systems, this quickly becomes a time-consuming activity. The effort required for all three of the above activities can be significantly reduced through tool support, which could be developed in-house, or via commercial tools, such as RapiTime.
Integrating such tool support into the build-test process means that the timing measurement can happen automatically during every build-test cycle. This gives designers the ability to see how the optimization activity progresses with every step, rather than waiting for the end of an optimization activity.
What about legacy systems?
Many of the systems that need to be optimized will fall into the category of “legacy systems”: systems whose age means that there is restricted support for the computing platform, both in terms of software tools and of the hardware interfaces available to connect to the target.
A key aspect of handling such systems is flexibility in the approach taken to timing analysis. For example, it may not be possible to use modern debugging interfaces or other specific hardware interfaces. The approach to making timing measurements must therefore be capable of adapting to the facilities that are available. At the same time, the impact of any instrumentation code must be minimized, as far as is possible, to avoid running out of resources (for example, memory or CPU capacity) during the measurement activity.
Military avionics system software unavoidably “ages,” which can lead to expensive, unplanned hardware upgrades. The alternative is optimization of aged software, which can only realistically be performed through a program of measurement, optimization, and review.
Automating the measurement of software performance minimizes the effort involved, and also allows measurements to demonstrate incremental improvements to the software performance.
Dr. Andrew Coombes leads the marketing department at Rapita Systems Ltd., a company specializing in tools for on-target verification of high-integrity embedded software. For the last twenty years he has been involved in the development and commercialization of software tools for embedded, real-time applications. He received his D.Phil. in Computer Science at the University of York in the U.K.
Rapita Systems www.rapitasystems.com