Autonomy

One of the most fascinating ways to look at a technical problem is to probe its full complexity. Traditional approaches typically tried to break a problem area down into smaller, more easily understandable parts. There is much merit to this approach, provided the problem is what’s considered “linear”: it can not only be broken down, but also be “reassembled” without missing any critical components. Of course, how many times have you taken a piece of technology apart to repair a part inside and reassembled it, only to find one or two bolts that didn’t find their way back in? Still worked? Good piece of tech. Still didn’t work (even after you disassembled and reassembled it again)? Not-so-good piece of tech.

Now imagine a more complicated system. Let’s say people working together in an air traffic facility: each person’s interaction with colleagues, each piece of technology’s interaction with other technology, a person interacting with their technology, a person not interacting with technology, and so on. That’s a very complicated system. Trying to break each part of this system down into smaller, more understandable parts may have some merit, but can one “reassemble” the parts with any real confidence?

Let’s consider another complicated system. Imagine automated cars driving in an urban environment. It’s pretty simple to put some automation systems in the vehicles, and pretty simple to put together a set of rules for interacting (“don’t hit another vehicle”). But have more than two automated cars interacting, and you have a much more complicated system.
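One way to see why the system complicates so quickly: even a simple pairwise rule like “don’t hit another vehicle” must be checked for every pair of cars, so the number of interactions grows quadratically, and each avoidance maneuver can create new conflicts. A minimal sketch (the car labels here are hypothetical illustrations, not from the text):

```python
from itertools import combinations

def pairwise_interactions(n_cars):
    """Number of "don't hit another vehicle" checks among n cars: n*(n-1)/2."""
    return n_cars * (n_cars - 1) // 2

# With 2 cars there is 1 interaction to reason about; growth is quadratic.
for n in (2, 3, 5, 10):
    print(n, "cars ->", pairwise_interactions(n), "interactions")

# Each check is also a coupled decision: if car A swerves to avoid B,
# it may create a new conflict with C -- the parts don't "reassemble" cleanly.
cars = ["A", "B", "C"]
print(list(combinations(cars, 2)))  # [('A', 'B'), ('A', 'C'), ('B', 'C')]
```

The point of the sketch is only the growth rate: two cars give one interaction to assure, ten cars give forty-five, before any coupling between maneuvers is even considered.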
And so, rather than focusing on the more traditional approach of problem decomposition, let's consider understanding problems and challenges in light of the full depth and breadth of their problem space by embracing nonlinear dynamic concepts, under the rubric of "Chaos Theory," which provides a very robust environment in which to examine solutions.

Understanding a problem in its context is key to its solution.

Automation, for example, is one such area. In traditional automation, a set of conditions is provided to a machine for processing, and once processing is complete, the machine exits with a solution. Simple. That simplicity affords repeatable processing (certifying the processing, or ensuring that a predictable outcome always results from a predictable set of inputs) and confidence in its results. This kind of automation has decades of experience and best practices behind it. Approaches like the design point (known inputs and known outputs) and testing for known-unknown conditions are robust within the systems engineering discipline. But it has its limitations in the kinds of applications for which it is beneficial. For example, accommodating unknown-unknown conditions is not feasible with simple automation.
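The idea above can be sketched in a few lines: a traditional automated process is a deterministic mapping from known inputs to known outputs, which is exactly what makes it certifiable by repeated testing, and an input outside its design point simply is not handled. (The thermostat rule and its thresholds below are hypothetical illustrations, not from the text.)

```python
def thermostat(temp_c):
    """Traditional automation: a fixed rule over a known input range (design point)."""
    if not (-40.0 <= temp_c <= 60.0):
        # Outside the design point: the system can only refuse, not adapt.
        raise ValueError("input outside certified design point")
    return "heat" if temp_c < 18.0 else "off" if temp_c < 24.0 else "cool"

# Certification by repetition: the same inputs always give the same outputs.
known_cases = {10.0: "heat", 20.0: "off", 30.0: "cool"}
assert all(thermostat(t) == out for t, out in known_cases.items())

# An unknown-unknown (say, a sensor reporting 500 degrees) is not accommodated;
# the best a simple automated process can do is reject it.
try:
    thermostat(500.0)
except ValueError as err:
    print("rejected:", err)
```

Testing every known case exhaustively is feasible here precisely because the rule is fixed and the input range is bounded; that is the strength, and the limitation, the paragraph describes.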

Automation of automated systems is the next incarnation: automating a number of automated systems and processes. In this case, the need for assurance is still in place, but the problem of ensuring predictable outcomes is orders of magnitude more complicated (complex, perhaps) than for a simple automated process. The promise is that automating these automated systems provides some ability to accommodate unknown-unknown conditions. And thus the promise of "self-driving vehicles" is being pursued.
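One way to see why the assurance problem grows by orders of magnitude: a supervisor over several automated subsystems must behave predictably over the product of its subsystems' states, not over each one separately. A toy sketch (the subsystem names, states, and supervisor rules are hypothetical):

```python
from itertools import product

# Each subsystem is itself a simple, certifiable automaton with a few states.
subsystems = {
    "braking":  ["ok", "degraded", "failed"],
    "steering": ["ok", "degraded", "failed"],
    "sensing":  ["ok", "degraded", "failed"],
}

# The supervisor automates the automated parts: it must define an action
# for every combination of subsystem states.
combined = list(product(*subsystems.values()))
print(len(combined))  # 3**3 = 27 combinations to assure, vs. 3 per subsystem

def supervisor(state):
    """A contingency table: safe only for combinations someone anticipated."""
    if "failed" in state:
        return "pull over"
    if "degraded" in state:
        return "limit speed"
    return "normal"

assert supervisor(("ok", "degraded", "ok")) == "limit speed"
```

Three subsystems with three states each already give 27 combinations to test; real subsystems have far more states, so the assurance burden grows multiplicatively, which is the "orders of magnitude" point above.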

Autonomy is a popular word these days. Unfortunately, it is being applied to the mere automation of automated systems. True autonomy is a fully learning system that adapts to its environment, and does so through contextual machine learning (using neuromorphic machines to overcome the limitations of traditional Turing machines), emergence (where the system understands the changing environment rather than relying on contingency plans and traditional "what-if" scenarios), and a new paradigm of trust-assurance (one fundamentally different from IV&V and certification). True autonomy is not here now. It may arrive some day, when the benefit of truly autonomous systems exceeds the risks associated with them.

 

Herb Schlickenmaier