When will artificial general intelligence be ready for smart weapon and sensor systems?
Story | June 17, 2020
By David Sherwood and Terry Higbee
Modern sensors and weapon systems, particularly those associated with autonomous vehicles, will be required to make smart, trustworthy decisions in milliseconds. But is today’s artificial intelligence (AI) technology up to the task? Current AI solutions can perform quite well at detecting patterns in data, but they fall far short of understanding the meaning and relevance of those patterns in the way humans do. Furthermore, today’s technology is computationally slow, brittle, and opaque; when the response is not right it can be catastrophically wrong, and the system cannot explain how it arrived at the answer. Although AI systems are still quite far from human-like reasoning, recent developments in artificial general intelligence (AGI) are making enormous strides in the right direction. These advances will revolutionize sensors, weapon systems, and other defense embedded systems.
Consider this hypothetical scenario: an autonomous sensor platform is patrolling a region of space not far from adversary platforms and gathering intelligence. (Figure 1.) Suddenly, the platform is hit by a shock wave. Some of its sensors are now malfunctioning, and one of them is indicating a critically low battery level. If the platform does not return to base right away, it and its stored intelligence may be lost or fall into enemy hands.
[Figure 1: Autonomous platforms will become increasingly dependent upon AGI.]
Perhaps, the platform reasons, the malfunctioning sensors are the result of being struck by lightning. The platform must now decide whether to continue gathering intelligence according to the mission directive or return to base to protect itself and the intelligence it has already collected. But nothing in its explicit programming tells it how to respond to a situation exactly like this.
Fortunately, this is an intelligent platform that was designed to deal with unanticipated situations. It knows that if it stays the course, it might gather highly valuable intelligence. It also knows that the platform itself has high value and should not be put at risk without a high probability of collecting unusually valuable data. To break this deadlock, the platform reasons, it must determine whether the low-battery warning is real or a sensor malfunction. It therefore turns on as many electronic devices as possible to deliberately increase the drain on the battery, then loiters while closely monitoring the battery level. After a minute the reported level has not budged, so the platform concludes that the battery sensor is most likely malfunctioning and continues to gather the much-needed intelligence.
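The decision logic in this scenario can be sketched in a few lines of code. The sketch below is purely illustrative: the PlatformBus interface, its method names, and the thresholds are assumptions invented for this example, not a real flight or sensor API.

```python
# Hypothetical sketch of the battery self-diagnosis described above.
# The platform deliberately increases electrical load, watches the reported
# battery level for a fixed interval, and infers whether the low-battery
# reading is real or a sensor fault. All interfaces and thresholds here
# (PlatformBus, observe_s, min_expected_drop) are illustrative assumptions.

import time


class PlatformBus:
    """Stand-in for the vehicle's power/telemetry interface (assumed)."""

    def battery_level(self) -> float:
        """Return the reported battery charge as a fraction (0.0-1.0)."""
        raise NotImplementedError

    def set_high_load(self, enabled: bool) -> None:
        """Switch nonessential electronics on or off to change power draw."""
        raise NotImplementedError


def battery_sensor_is_faulty(bus: PlatformBus,
                             observe_s: float = 60.0,
                             sample_s: float = 5.0,
                             min_expected_drop: float = 0.01) -> bool:
    """Return True if the low-battery reading looks like a sensor fault.

    Under deliberately increased load, a healthy battery sensor should show
    at least `min_expected_drop` of discharge over `observe_s` seconds.
    If the reading does not budge, the sensor is probably malfunctioning.
    """
    start_level = bus.battery_level()
    bus.set_high_load(True)
    try:
        elapsed = 0.0
        while elapsed < observe_s:
            time.sleep(sample_s)
            elapsed += sample_s
        drop = start_level - bus.battery_level()
    finally:
        bus.set_high_load(False)   # always restore normal power draw

    return drop < min_expected_drop


def choose_action(bus: PlatformBus) -> str:
    """Decision rule from the scenario: fault means keep collecting."""
    return "continue_mission" if battery_sensor_is_faulty(bus) else "return_to_base"
```

The point is not the specific thresholds but the pattern: the platform designs its own experiment, gathers evidence, and only then commits to an action.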
Autonomous platforms need strong artificial intelligence
As autonomous platforms proliferate and the military environment becomes ever more complex and lethal, these platforms will have to negotiate situations that their planners could not have anticipated and must be relied on to make quick decisions that are reasonable and explainable. Despite the amazing recent advances in AI, it is not clear that AI as we know it today is on a path toward providing machine intelligence with the flexibility that humans exhibit routinely. What we need is artificial general intelligence (AGI).
What is AGI?
AGI is a relatively new term that recaptures what was originally (back in the 1950s) meant by artificial intelligence. Despite the enormous growth of computing power and increasing sophistication of algorithms in the past 60-plus years, very limited progress has been made towards computers that can understand, think, and reason in a way that resembles human information processing.
Although there is no universal agreement on the definition or measurement of AGI, today’s AI does not begin to understand, reason, or think. Today’s AI is often referred to as “weak” AI as opposed to the “strong” capability of AGI. There are signs, however, that recent developments in the design of knowledge representation for artificial neural networks (ANNs) may be opening the door to true AGI technology.
AGI systems will invariably be powered by knowledge
There is no escaping the basic fact that knowledge is power. AGI systems, regardless of their exact design, will need knowledge bases filled with mission-related information to succeed. (Figure 2.) We see this in our lives too: Children begin learning and filling their knowledge bases the day they are born. They learn by seeing, hearing, and touching the environment, together with years of learning from parents, friends, and teachers. Similarly, AGI systems will be powered by deep knowledge bases.
[Figure 2: AGI systems will be powered by knowledge.]
Knowledge representation is key to AGI systems
Decades ago, there were extensive efforts to develop knowledge representations for AI systems. Today, new techniques are emerging for representing knowledge in an ANN. In this approach, the encoded information becomes analogous to a memory “engram” (a term for a unit of cognitive information) in a person’s brain.
The essence of human intelligence is “generalization” and “abstraction.” New design strategies for knowledge representation embrace that view and design the “engrams” to promote strong generalization, not unlike the way that a communications engineer designs codes to support error correction.
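The error-correction analogy can be made concrete with a toy example. The sketch below is not the authors’ actual representation; it simply encodes each concept as a redundant, high-dimensional binary “engram” and shows that a noisy or partial cue still retrieves the right concept, the same property that lets a communications code correct errors.

```python
# Toy illustration of the error-correction analogy: each concept is stored
# as a redundant, distributed binary code ("engram"), and a corrupted cue
# still decodes to the right concept via nearest-neighbor matching.
# Concept names and code width are invented for illustration.

import random

CODE_BITS = 256          # width of each distributed engram
random.seed(0)

def make_engram() -> list[int]:
    """Assign a concept a random, high-dimensional binary code."""
    return [random.randint(0, 1) for _ in range(CODE_BITS)]

def corrupt(code: list[int], flip_fraction: float) -> list[int]:
    """Simulate a noisy or partial cue by flipping a fraction of the bits."""
    noisy = code[:]
    for i in random.sample(range(CODE_BITS), int(flip_fraction * CODE_BITS)):
        noisy[i] ^= 1
    return noisy

def hamming(a: list[int], b: list[int]) -> int:
    """Count the bit positions where two codes disagree."""
    return sum(x != y for x, y in zip(a, b))

def recall(cue: list[int], memory: dict[str, list[int]]) -> str:
    """Decode the cue to the nearest stored engram (error correction)."""
    return min(memory, key=lambda name: hamming(cue, memory[name]))

memory = {name: make_engram() for name in
          ("low_battery", "sensor_fault", "lightning_strike", "enemy_radar")}

# Even with 30% of the cue's bits corrupted, the nearest engram is recovered,
# because unrelated random codes differ in roughly half of their bits.
cue = corrupt(memory["sensor_fault"], flip_fraction=0.30)
print(recall(cue, memory))   # -> "sensor_fault"
```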
Key technical features of AGI systems
AGI systems will employ knowledge generalization to provide capabilities including:
- Inferencing: AGI systems will have a robust capability to retrieve information from their knowledge bases that is conceptually related to the current state, then decide how best to respond to the current situation based on extensive knowledge of similar situations and their outcomes (a toy sketch of this retrieval follows the list).
- Mechanisms to “focus” knowledge retrieval: AGI systems will focus their retrieval and processing logic on the attributes most relevant to the priorities of the current situation. For example, deciding how to maximize the value of the intelligence being collected is not the best use of inferencing capability when the platform is on fire.
- General model-based reasoning: Many embedded systems today possess brilliant but weak intelligence (in the AI sense of “weak”). These systems are often a mix of smart subsystems that talk through well-defined APIs. In contrast, an AGI system does not have to be spoon-fed its inputs through such APIs but can handle unstructured data from a webpage, a document, or a new subsystem and make the connections between that information and its internal models through conceptual generalization. This is an entirely different and much more powerful way of thinking about “variable binding” in computer systems.
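The first two capabilities can be illustrated with a minimal sketch. The attribute names, stored situations, and focus weights below are invented for this example; the sketch only shows the shape of the idea: retrieve the most conceptually similar past situation, with the similarity measure reweighted by the current priority.

```python
# Toy sketch of inferencing plus "focus": retrieve the most similar stored
# situation from a small knowledge base, with a focus vector that reweights
# attributes to match the current priority. All data here is invented.

from math import sqrt

# Each stored situation: attribute vector plus the action that worked.
ATTRS = ("threat_level", "battery_risk", "intel_value", "damage")
KNOWLEDGE_BASE = [
    ({"threat_level": 0.2, "battery_risk": 0.9, "intel_value": 0.4, "damage": 0.1},
     "return_to_base"),
    ({"threat_level": 0.1, "battery_risk": 0.2, "intel_value": 0.9, "damage": 0.1},
     "continue_collection"),
    ({"threat_level": 0.9, "battery_risk": 0.3, "intel_value": 0.5, "damage": 0.7},
     "evade_and_report"),
]

def weighted_similarity(a: dict, b: dict, focus: dict) -> float:
    """Cosine-like similarity over attributes, scaled by focus weights."""
    dot = sum(focus[k] * a[k] * b[k] for k in ATTRS)
    na = sqrt(sum(focus[k] * a[k] ** 2 for k in ATTRS))
    nb = sqrt(sum(focus[k] * b[k] ** 2 for k in ATTRS))
    return dot / (na * nb) if na and nb else 0.0

def infer_action(current: dict, focus: dict) -> str:
    """Retrieve the most similar stored situation and reuse its outcome."""
    best = max(KNOWLEDGE_BASE,
               key=lambda entry: weighted_similarity(current, entry[0], focus))
    return best[1]

# Current state: some damage, suspect battery reading, moderate intel value.
current = {"threat_level": 0.3, "battery_risk": 0.8, "intel_value": 0.6, "damage": 0.4}

# When platform survival is the priority, focus shifts weight onto risk and damage.
survival_focus = {"threat_level": 1.0, "battery_risk": 1.0, "intel_value": 0.2, "damage": 1.0}
print(infer_action(current, survival_focus))   # -> "return_to_base"
```

A production system would obviously use far richer representations than flat attribute vectors, but the division of labor is the same: the knowledge base supplies precedent, and the focus mechanism decides which aspects of the precedent matter right now.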
State of the art in AGI
It’s difficult to know the exact state of the art today because most organizations in the AI community have decided to focus on the much more tractable problem of weak AI. But in recent years some organizations have shown signs of success. We estimate that if human-level intelligence is the top rung on a ladder with, for instance, 10 rungs, today’s top AGI technology is probably on about the second or third rung.
Humans still have an enormous advantage over AGI systems and will maintain that advantage for many years:
- Humans not only possess much larger knowledge bases than anything we can construct today, but their knowledge also describes the physical world more comprehensively.
- Humans also have more comprehensive capabilities for “data cleansing” and dealing with inconsistent and unreliable data.
AGI systems, on the other hand, may have the following advantages:
- Much faster processing time (especially when using dedicated hardware accelerator chips).
- Retrieving data from the knowledge base does not corrupt the data as it does with humans. (Humans modify engrams as they recall them.)
Research and development conducted over many years has demonstrated that a systematic approach to knowledge representation for ANNs provides a solid foundation for AGI. Now that progress in AGI is beginning to surge, can we expect to see funding made available to further expand progress in this key technology?
David Sherwood, the founder of Cognitive Science and Solutions, is an EW systems engineer with decades of experience in digital signal processing and neural networks. Readers may reach him at [email protected].
Dr. Terry Higbee, the cofounder and CTO of Cognitive Science and Solutions, has a Ph.D. from Stanford University and more than five decades of experience in computer architecture, signal processing, algorithm development, and cybersecurity. Email the author at [email protected].
Cognitive Science & Solutions
www.cogscisol.com