The world is on the slippery slope towards handing serious decisions to machines: autonomous cars, war drones, robots and security surveillance systems.  The driving forces are digital technology, confidence in AI, and the widespread belief that the behavior of intelligent machines can be predetermined by human decisions.  Are we really ready to put our privacy, our security or even our lives into the hands of machines?  And if so, what kind of machines?

Life may be mostly a routine affair.  In fact, over centuries great efforts have been made to let all of our activities be governed by standards, rules and regularity.  Every now and then, however, normal operations are disrupted by events without precedent, by emergencies that are not in the books and have never happened before in that particular form, yet need immediate critical action.  What do we humans do to cope with emergencies?  We go back to first principles and distantly related precedents.  We imagine the consequences of alternative options and relate them to our own goals and preferences, to ethics and law.  And then act accordingly.

The Mechanism of Intelligence

This ability to relate concrete situations to abstract goals is the essence of intelligence.  It lets us apply our basic repertoire of intentions and preferences to cope with novel situations.  The sciences that study human and animal behavior – psychology, anthropology, ethology – all share this view and illustrate it with a tremendous wealth of particular studies.

Present-day AI, however, in all its algorithmic or neural or Bayesian or machine learning or whatever forms, is unable to establish the necessary umbilical cord between the details of a scene that relate to an emerging problem and a repertoire of abstract goals and principles.  If you are ready to identify this repertoire with emotions, then it is plain that machines should have emotions.  And so far, they don’t.  This is the elephant in the room of AI.

There is only one conclusion to be drawn from this: hand over critical decisions to a machine only if it has emotions, personality and character (of the right kind, of course!).  If your autonomous car is unable to solidly relate the traffic scene to behavioral options and to weigh those options in the light of good and bad in our human sense, then don't let go of the steering wheel!
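To make that demand concrete, here is a minimal toy sketch in Python (every option, value and weight below is hypothetical, not part of any real driving system): candidate actions are scored against a small repertoire of abstract values, which is the kind of weighing of options against good and bad that the paragraph above asks for.

    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        effects: dict  # value name -> estimated impact in [-1, 1] (illustrative only)

    # Hypothetical value repertoire; the weights stand in for "character".
    VALUES = {"human_safety": 1.0, "legality": 0.6, "passenger_comfort": 0.1}

    def score(option: Option) -> float:
        """Weigh an option's predicted consequences against the abstract values."""
        return sum(VALUES.get(v, 0.0) * impact for v, impact in option.effects.items())

    options = [
        Option("brake_hard", {"human_safety": 0.9, "passenger_comfort": -0.5}),
        Option("swerve_onto_sidewalk", {"human_safety": -0.8, "legality": -1.0}),
    ]

    print(max(options, key=score).name)  # -> brake_hard

The hard part, of course, is not the arithmetic but filling in the effects: relating a raw traffic scene to such abstract consequences is exactly what present-day systems cannot do.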

Classical AI had it right

Classical AI was certainly on the right track when basing much of its work on the concept of applying schemas (the frames of M. Minsky, the scripts of R. Schank, etc.)  to concrete situations to make sense of them.  Why didn’t this approach come to fruition? What went wrong in the early 80s? What is needed to revive it?
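Before turning to what went wrong, a deliberately simplified illustration of the schema idea may help (a toy sketch, not Minsky's or Schank's actual formalism): a frame with slots and defaults is instantiated by a concrete observation, and whatever the observation leaves open is filled in from the frame.

    # Hypothetical restaurant script: slot -> default value.
    RESTAURANT_SCRIPT = {
        "agent": None,
        "place": None,
        "action": "eat",
        "payment": "expected_at_end",
    }

    def apply_frame(frame: dict, observation: dict) -> dict:
        """Instantiate the frame: take slot values from the observation, defaults otherwise."""
        return {slot: observation.get(slot, default) for slot, default in frame.items()}

    situation = {"agent": "customer", "place": "diner"}
    print(apply_frame(RESTAURANT_SCRIPT, situation))
    # {'agent': 'customer', 'place': 'diner', 'action': 'eat', 'payment': 'expected_at_end'}

The frame supplies the sense-making: it tells the system what to expect and what to ask about, even when the observation itself is sparse.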

First, classical AI was stuck with toy applications because it didn’t have natural environment perception, that is, the ability to link raw sensory signals to abstract representations.  We still don’t have computer vision in this sense (or we would be able to invert computer graphics), nor do we have auditory scene analysis.

Second, classical AI believed in manual construction instead of learning: it relied on human-designed algorithms and manual entry of common knowledge.  The efficiency with which infants absorb the structure of their environment is unmatched to this day.

Third, classical AI worked with graphs of coarse nodes and links where a generic, fine-grained data format for knowledge representation is needed.  It believed in symbols and rejected neurons.  In the case of symbols vs. neurons, the jury is still out.  You cannot live without rich detail (pro neural population, con symbol) and you cannot live without the equivalent of symbolic operations (pro symbol, con neural population).

The three issues will have to be solved in one blow, of course.  Efficient learning from sparse examples needs perception, the extraction of the abstract essence of scenes.  Perception, on the other hand, needs learning, as the tremendous ambiguity of the impoverished sensory input can only be resolved with reference to memory patterns that were created with the help of thorough multi-modal examination.  And this bootstrapping process – perceiving to learn, learning to perceive – can only work if there is a universal data format able to absorb and express the structure of the world.
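A fully hypothetical toy can illustrate the shape of that loop (nothing below is a working perceptual system): noisy signals are interpreted against stored prototypes, and each matched prototype is then nudged toward the signal it helped to interpret, so perception and learning pull each other along.

    import random

    prototypes = [0.0, 10.0]  # crude stand-ins for memory patterns

    def perceive(signal: float) -> int:
        """Resolve the ambiguous signal to the closest known pattern."""
        return min(range(len(prototypes)), key=lambda i: abs(prototypes[i] - signal))

    def learn(index: int, signal: float, rate: float = 0.1) -> None:
        """Refine the matched pattern using the percept it helped to interpret."""
        prototypes[index] += rate * (signal - prototypes[index])

    random.seed(0)
    for _ in range(200):
        source = random.choice([2.0, 8.0])        # hidden structure of the "world"
        signal = source + random.gauss(0.0, 1.0)  # poor, noisy sensory input
        learn(perceive(signal), signal)

    print([round(p, 1) for p in prototypes])  # the prototypes drift toward 2.0 and 8.0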

This issue, a universal data structure for mental content combining the strengths of symbols and neurons, may well be the key question standing between us and true intelligence in silico.
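One hedged way to picture what such a format might offer (purely illustrative, not an established representation): each item carries both a discrete symbolic handle, supporting composition and exact matching, and a dense feature vector, supporting graded similarity.

    from dataclasses import dataclass
    from math import sqrt

    @dataclass
    class HybridNode:
        symbol: str    # supports symbol-like operations: equality, composition
        features: list # supports graded, neuron-like similarity

    def similarity(a: HybridNode, b: HybridNode) -> float:
        """Cosine similarity on the feature vectors: the rich-detail side."""
        dot = sum(x * y for x, y in zip(a.features, b.features))
        norm = sqrt(sum(x * x for x in a.features)) * sqrt(sum(y * y for y in b.features))
        return dot / norm if norm else 0.0

    def compose(a: HybridNode, b: HybridNode) -> HybridNode:
        """A symbol-like binding whose features blend those of the parts."""
        blended = [(x + y) / 2 for x, y in zip(a.features, b.features)]
        return HybridNode(symbol=f"({a.symbol} {b.symbol})", features=blended)

    cat = HybridNode("cat", [0.9, 0.1, 0.3])
    dog = HybridNode("dog", [0.8, 0.2, 0.3])
    print(round(similarity(cat, dog), 3))  # graded, not just equal / unequal
    print(compose(cat, dog).symbol)        # '(cat dog)': a discrete, composable handle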

On to true Autonomy in the Machine

An autonomous machine is, of course, an oxymoron.  By definition machines are made to serve as dependable tools in human hands, without surprises (unlike the capricious elevator in The Hitchhiker's Guide to the Galaxy) and, well, without autonomy.  So maybe we should get used to speaking of electronic organisms instead.  If we want them to be intelligent we will have to give them autonomy.  If we want their autonomy to serve our purposes, we have to describe those purposes in abstract terms, as law and ethics do – that is, we will have to endow our electronic servants with emotions and character, as well as the mechanisms to exercise these in situations as they arise.

There is no fundamental barrier to goal-oriented electronic organisms.  

Maybe we only have to get certain prejudices out of the way.
