Space missions gaining vision, precision, autonomy
June 01, 2018
Space missions requiring higher levels of autonomy and greater precision in navigation are becoming increasingly prevalent, and vision-based sensing systems are a key enabling technology for these applications. By providing high-precision relative navigation, these systems play a critical role in rendezvous and proximity operations (RPO); entry, descent, and landing (EDL); and robotic exploration of the solar system. With the private sector driving more ambitious mission requirements through advances such as autonomous satellite servicing, the future of these applications looks bright.
Despite the increasing popularity of vision-based sensing systems, developing them has been costly and resource-intensive. The algorithms that translate a raw image into data usable for vehicle control are developed by a small group of engineers with specialized expertise. Verification and validation of these algorithms can involve complex physical testbeds featuring robots moving on tracks toward physical scale models of approach targets such as spacecraft and asteroids.
Once the algorithms are developed and validated by test, implementation is complicated by the need to optimize the available onboard processing resources, which are often limited by the availability of computing hardware that can survive the hostile radiation environment of space. As part of this optimization, it is common to distribute portions of the algorithms between field-programmable gate arrays (FPGAs) and general-purpose processors; this split, however, can increase both design complexity and the number of engineering specializations required.
Change is brewing: The ongoing private space race, which is disrupting many space-related technologies, is also driving down the cost of developing relative navigation capabilities. Competitions like the Google Lunar XPRIZE have motivated new companies to develop extraterrestrial landing technology at substantially lower cost than was previously possible, in part because these companies use higher-level languages such as MATLAB and Simulink for algorithm development. This approach lets algorithm design engineers focus on the high-level application rather than spend time reinventing lower-level image-processing routines, which are now available off the shelf.
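As a rough illustration of what "off the shelf" means in practice, the following MATLAB sketch matches features between a stored reference view of a target and a live camera frame using Computer Vision Toolbox routines; the image file names are placeholders rather than assets from any real program.

```matlab
% Sketch: relative-navigation front end assembled from off-the-shelf
% image-processing routines (Computer Vision Toolbox). File names are
% placeholders; images are assumed to be RGB.
refImg = rgb2gray(imread('target_reference.png'));  % stored view of the target
camImg = rgb2gray(imread('camera_frame.png'));      % current camera frame

% Detect and describe local features in both images
refPts = detectSURFFeatures(refImg);
camPts = detectSURFFeatures(camImg);
[refFeat, refValid] = extractFeatures(refImg, refPts);
[camFeat, camValid] = extractFeatures(camImg, camPts);

% Match features and robustly estimate the transform between the views,
% which would feed the relative-navigation filter downstream
pairs = matchFeatures(refFeat, camFeat);
[tform, inRef, inCam] = estimateGeometricTransform( ...
    refValid(pairs(:, 1)), camValid(pairs(:, 2)), 'similarity');
```

The feature detection, description, matching, and robust estimation steps are all library calls, leaving the engineer free to concentrate on how the resulting relative-pose estimate feeds the navigation filter.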
These higher-level languages also enable rapid prototyping of candidate algorithms, which can be integrated with existing guidance, navigation, and control (GN&C) models for early system-level validation. Combined with model-based design, they also let software and hardware development engineers automatically generate code for embedded deployment on both processors and FPGAs and create test benches for system verification.
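To make the dual deployment concrete, here is a minimal sketch of generating both C code and HDL from the same MATLAB function; centroidTracker is a hypothetical, code-generation-compatible function, and the 1024-by-1024 image size is an assumption, not something specified in this article.

```matlab
% Sketch: generating embedded code for a hypothetical image-processing
% function, centroidTracker.m, that takes a 1024x1024 uint8 image.
% Function name and image size are illustrative assumptions.

% C code for the flight processor (MATLAB Coder)
cpuCfg = coder.config('lib');   % generate a static library
codegen centroidTracker -config cpuCfg -args {zeros(1024, 1024, 'uint8')}

% HDL for the FPGA portion of the design (requires HDL Coder and a
% function written within the HDL-compatible subset of MATLAB)
fpgaCfg = coder.config('hdl');
codegen centroidTracker -config fpgaCfg -args {zeros(1024, 1024, 'uint8')}
```

Deciding which parts of the algorithm go to the processor and which to the FPGA remains an engineering judgment, but the generation step itself no longer requires hand-written C or VHDL.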
Developing autonomy using machine learning
While workflows in the space industry have seen incremental changes, other industries – most notably automotive – have completely transformed their approach by using recent advances in machine learning to help develop their autonomous systems. Taking advantage of large amounts of collected data, they are using deep learning to train systems to detect and recognize objects to enable autonomous operations.
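For readers unfamiliar with what that training step looks like in a higher-level environment such as MATLAB, a minimal classification sketch follows; the folder of labeled images, the network depth, and the training settings are illustrative assumptions.

```matlab
% Sketch: training a small convolutional classifier on labeled images.
% Folder name, image size, network depth, and training settings are
% illustrative assumptions; images are assumed already resized to 64x64 RGB.
imds = imageDatastore('labeled_images', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
numClasses = numel(categories(imds.Labels));

layers = [
    imageInputLayer([64 64 3])
    convolution2dLayer(3, 16, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')
    reluLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

opts = trainingOptions('sgdm', 'MaxEpochs', 10, 'MiniBatchSize', 64);
net  = trainNetwork(imds, layers, opts);
```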
Can machine-learning techniques be applied to relative navigation in space systems to overcome cost and resource challenges while simultaneously improving the capabilities of the system? A fundamental challenge to this approach is the traditional conservatism of the industry. The space industry has historically favored reliability and testability over performance. Current development processes and best practices expect the developed algorithms to be simple enough to be reviewed by humans, as well as to exhibit deterministic behavior, in which a given input to the algorithm always produces the same, known output.
Neither of these holds true for deep-learning networks, where the algorithms are essentially impossible for humans to understand and often produce outputs that are difficult to fully predict. Moreover, even if the expectations were to shift, the amount of training data available from space is highly limited compared with the data available from the world's millions of miles of roadways.
As the current trend toward increased mission complexity continues, spacecraft will increasingly explore unknown terrain and encounter unpredictable situations, often at distances from Earth that make real-time ground control impractical. Most relative navigation systems being developed today, however, need a well-known target. This leaves mission planners with two options: Either keep the mission simple enough that relative navigation is not needed, or add mapping the target to the mission objectives. The former may not satisfy the mission objectives; the latter adds considerable complexity.
What if the constraints on deterministic behavior were relaxed, so that mission planners on the ground could define spacecraft goals at a higher level? For example, a spacecraft could be allowed to choose its own landing spot on an asteroid, bounded by mission objectives. Mission planners might specify that the spot be clear of identified hazards (such as craters and boulders) and that it contain a desired substance of commercial or scientific interest (such as water). Machine-learning algorithms deployed onboard the spacecraft would use vision-based sensors to autonomously identify the best landing site meeting these constraints, while onboard guidance algorithms would compute the trajectory in real time. This approach could provide a substantial advantage to the mission in both time and cost.
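One way such onboard site selection could be structured is sketched below: a grid of terrain patches is scored by two hypothetical pretrained networks, hazardNet and waterNet (both are assumptions for illustration, not part of any flight system), and the safest patch with the strongest evidence of water is retained for the guidance algorithms to target.

```matlab
% Sketch: scoring candidate landing sites in a descent image.
% hazardNet and waterNet are hypothetical pretrained classifiers whose
% input size matches patchSize; the file name, threshold logic, and
% class-score ordering are illustrative assumptions.
img = imread('descent_frame.png');   % placeholder descent image
patchSize = 64;
bestScore = -Inf;
bestSite  = [1 1];                   % upper-left corner of the chosen patch

for r = 1:patchSize:size(img, 1) - patchSize + 1
    for c = 1:patchSize:size(img, 2) - patchSize + 1
        patch = img(r:r+patchSize-1, c:c+patchSize-1, :);

        hazardLabel      = classify(hazardNet, patch);
        [~, waterScores] = classify(waterNet, patch);
        waterEvidence = waterScores(1);   % assumed class order: [water, no water]

        % Keep only patches judged clear of craters and boulders, then
        % rank the survivors by evidence of water
        if hazardLabel == 'safe' && waterEvidence > bestScore
            bestScore = waterEvidence;
            bestSite  = [r c];
        end
    end
end
% bestSite would then be handed to the onboard guidance algorithms
```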
First, we govern simple logic
It is likely that within the next several years the industry will find uses for this and other applications that do not require fully predetermined spacecraft behavior. The machine-learning techniques enabling these applications will likely first be used to govern high-level logic with benign consequences, such as deciding when to initiate a simple task like selecting a scientific feature of interest to image or sample. As confidence in the approach grows, machine learning combined with vision sensing can be applied to more critical problems, such as autonomous docking or landing.
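That first, benign use might amount to little more than a confidence gate around a classifier. The sketch below assumes a hypothetical pretrained network, threshold, and task names purely for illustration.

```matlab
% Sketch: benign high-level decision logic gated by classifier confidence.
% featureNet, the 0.9 threshold, and the task names are illustrative
% assumptions.
function task = selectNextTask(featureNet, img)
    [label, scores] = classify(featureNet, img);
    if label == 'feature_of_interest' && max(scores) > 0.9
        task = "image_target";      % confident: initiate the simple task
    else
        task = "continue_survey";   % otherwise stay on the nominal plan
    end
end
```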
Although training data from space is generally lacking, significant imagery from Earth, the moon, and Mars is already available for machine-learning applications. It is also likely that sophisticated scene generators already in use for algorithm verification and validation – such as PANGU (Planet and Asteroid Natural scene Generation Utility), a computer-graphics utility funded by the European Space Agency – could provide additional training imagery for deep learning. For satellites, existing images of satellite features taken during ground processing can likewise be complemented with artificial scene generation. The satellite can then be taught to recognize a feature of interest and plan a path to it while avoiding obstacles such as radiators and solar arrays. (Figure 1.)
Figure 1: Existing images and scene generators can be used to, for example, teach a satellite to plan a path to a feature of interest while avoiding obstacles. Image courtesy MathWorks.
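A hedged sketch of how such a pooled training set might be assembled in MATLAB follows; the folder names, including the one standing in for PANGU-rendered imagery, and the augmentation ranges are placeholders, not a real dataset.

```matlab
% Sketch: pooling ground-test photos with synthetic renders into one
% labeled training set, then stretching it with randomized augmentation.
% Folder names and augmentation ranges are placeholders.
imds = imageDatastore({'ground_test_images', 'pangu_renders'}, ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

augmenter = imageDataAugmenter( ...
    'RandRotation',     [-180 180], ...
    'RandXTranslation', [-10 10], ...
    'RandYTranslation', [-10 10]);

trainData = augmentedImageDatastore([64 64], imds, ...
    'DataAugmentation', augmenter);
% trainData can be passed to trainNetwork in place of a raw datastore
```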
Space-rated GPUs and then …
Looking slightly further into the future, it is only a matter of time before space-rated graphics processing units (GPUs) become a reality, radically improving the processing power available on a spacecraft. When that happens, it is conceivable that an autonomous spacecraft could take the next evolutionary step: Continuously learn from its environment using deep-learning techniques and then apply that learning to fulfill its mission.
Today, the automotive industry is leading the way in deploying deep-learning technology, with computer vision as the primary application. Cars use electro-optically sensed data for object detection, classification, and tracking to build a model of their surroundings. This ability enables a wide range of automation levels, from advanced driver assistance systems (ADAS) to fully autonomous driving. The space industry has largely played the role of an interested spectator, suggesting that the automotive industry accepts more risk than the space industry can tolerate. However, the status quo is likely to be challenged soon.
The space industry is just starting to adopt the advances in vision-based sensing pioneered by automotive and other industries to develop increasingly autonomous spacecraft. In this first phase, higher-level languages and model-based design are improving the design efficiency and affordability of electro-optical systems. The next phase will deploy machine-learning algorithms on selected, low-risk space missions, forcing a fundamental shift in the way the industry defines requirements and verifies algorithms so that non-deterministic software can be accommodated. This, in turn, will ultimately enable the final phase: spacecraft teaching themselves how to explore previously uncharted territory.
Ossi Saarela is an aerospace engineer with almost two decades of experience in the space industry. Early in his career he served as a flight controller for the International Space Station; he later found his passion in spacecraft autonomy design. He has designed supervisory logic for satellites and worked on an electro-optical sensing system for rendezvous and proximity operations. He is currently the Space Segment Manager at MathWorks, helping space programs around the world reach their goals using the latest engineering tools and practices.
MathWorks www.mathworks.com