Two camps are vying for the wreath of creating artificial intelligence: one takes the computer as its intellectual focus, the other emphasizes studying the brain.  Which of the two camps will win?

The Contestants:
A. The Computer Camp

The computer camp has a very strong argument in its favor: Universality.  Whatever the functional principle of the mind or intelligence, the computer is guaranteed to be able to accommodate it.  The limit is only the insight and the imagination of the programmer’s brain.  It is impressive how much of what once was exclusive mind territory has been claimed by the computer, transforming much of our life.  The limitlessness of universality and some of the spectacular successes of computer-based AI have at times led to great arrogance in this camp, a feeling of superiority supported by the computer’s tremendous speed and accuracy.

And yet, we all know the computer has still not been programmed to emulate even children in terms of natural language understanding or situation-awareness.  It could just be a quantitative issue, a matter of sufficient software volume.  This is what motivates the software industry, which exploits all available talent to claim further territory for automation.  In its original enthusiasm, the field believed that whole application domains could be captured by simple sets of rules, just as numbers are captured by a few laws of arithmetic.  Unfortunately, that hope had to die in bitter disappointment.  Natural language, for instance, is riddled with irregularities, and vision is very far from being captured by simple laws of geometry and optics.  The computer camp lost steam and started to leer at the other, the brain camp.

B. The Brain Camp

That camp also started with a sure-win strategy: our best and only example of intelligence is the human brain – just learn how it does it.  Two things are clear about it.  First, it is composed of neurons, simple threshold elements, lots and lots of them.  Second, it is based on learning from example and not on anything like programming (except perhaps at a very general and fundamental level, by the genes).  There is great consensus in the field about the basic mechanism of learning, Hebbian plasticity: the strengthening of connections between neurons that are often activated at the same time.  Another obvious point about the brain is that neurons can be interpreted as simple symbols: record the activity of any neuron in the brain and, with enough patience, you will find a feature or a topic in response to which the neuron fires.

The brain camp thus seems to be in possession of a clear cognitive architecture: The state of the brain at a given moment is defined as the set of neurons that are currently active, each contributing its elementary symbolic meaning to the state.  This state is generated from moment to moment by the excitatory and inhibitory signals that the neurons receive from each other and from sensory input.  The memory resides in the strengths of the connections between neurons, and the mechanism of memory formation is Hebbian plasticity.  Perfect.
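The cognitive architecture just described can be written down in a few lines.  The following is only a toy sketch: the network size, threshold, learning rate, and the helper names `step` and `hebb` are arbitrary choices made for illustration.

```python
import numpy as np

n = 8                      # number of neurons (arbitrary toy size)
W = np.zeros((n, n))       # connection strengths: the "memory"
theta = 0.5                # firing threshold (arbitrary)

def step(state, W, theta):
    """Next brain state: each neuron fires iff its summed input exceeds theta."""
    return (W @ state > theta).astype(float)

def hebb(W, state, lr=0.1):
    """Hebbian plasticity: strengthen connections between co-active neurons."""
    W = W + lr * np.outer(state, state)
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

# Repeatedly experience a pattern; connections within it grow stronger.
pattern = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float)
for _ in range(10):
    W = hebb(W, pattern)

# A partial cue now recalls the full stored pattern in a single update step.
cue = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)
recalled = step(cue, W, theta)
print(recalled)   # the three co-trained neurons become active together
```

The point of the sketch is how little machinery the architecture requires: a state vector, a weight matrix, a threshold rule, and the Hebbian update, exactly the four ingredients named above.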

A See-Saw History

The two camps have had a lively history over the last 50 years (although their roots reach deeper into the past), their vital signs going through violent swings, usually in see-saw fashion.  At present, the neurosciences thrive: they are well funded, extremely creative in terms of the most incredible new experimental procedures, and there are plenty of positions even for theoretical neuroscientists (in contrast to, say, the 1970s, when there were none whatsoever!).  But if you get to talk over a beer to leading exponents of the field, you will find that, under that healthy surface, there is great pessimism about the prospect of ever being able to understand the function of the brain.

We are thus in a very curious state.  The computer camp is totally disheartened, steamrollered by the brain camp’s deep learning, yet the neuroscientists themselves doubt that deep learning could get us any nearer to the goal.  The more superficial reasons for their doubt are that back-propagation of error, the learning method behind deep learning, is, as far as is known, not implemented in the brain at all, and that cortical areas (especially primary visual cortex) contain many within-layer connections, in contrast to the exclusively feed-forward or feed-back connections between layers in deep learning systems, suggesting a totally different kind of neural dynamics.

A deeper reason for the brain camp’s mistrust of its own temporary victory is that deep learning does not replicate the tremendous flexibility of humans, or even animals, in coping with unexpected situations.  The output sheet of a multi-layered perceptron is composed of a finite number of units, and the response to the pattern presented at the input sheet consists in activating a subset of those units.  A network can be trained to predict with high accuracy the majority human reaction to a given input, and it is thus possible to train into a system reactions to a finite collection of situations, provided a sufficient number of samples has previously been collected for each.  Deep learning systems are thus limited to what has been pre-masticated for them.  This is not what we call intelligence, the ability to cope with unexpected situations.
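The structural point about the finite output sheet can be made concrete in a small sketch.  The weights below are random stand-ins for a trained network and all sizes are arbitrary assumptions; the point is only that an argmax over a fixed set of output units forces every input, however novel, into one of the trained categories.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden, n_out = 16, 32, 5    # n_out: the finite set of trained reactions
W1 = rng.normal(size=(n_hidden, n_in))   # stand-in for trained weights
W2 = rng.normal(size=(n_out, n_hidden))

def respond(x):
    """Forward pass: whatever x is, the answer is one of n_out trained labels."""
    h = np.maximum(0.0, W1 @ x)      # hidden layer (ReLU)
    return int(np.argmax(W2 @ h))    # forced choice among the fixed output units

# Even a completely novel, meaningless input is shoehorned into a known class:
novel_input = rng.normal(size=n_in)
label = respond(novel_input)
assert 0 <= label < n_out            # the network has no way to say "unknown"
```

Nothing in the architecture lets the system step outside its five pre-trained reactions; that limitation is built into the shape of the output sheet itself.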

Here we stand…

Both camps thus had to find out that their approach is too pedestrian: arbitrary functionality can be put into the machine, but at the price of tremendous human effort, either in terms of software development or in terms of sample collection.  Tremendous expense as the price to be paid for functional flexibility could be accepted as just the way it is, as part of the nature of things, if the human brain did not demonstrate that flying is possible, that novel situations can evidently be mastered by relating them to abstract principles and goal projections.  (The very software development of the computer camp and the crowd-sourcing of the brain camp would, of course, not be possible without the human brain’s ability to fly!)

Where do we go from here?  If you are a young person, take this situation as a tremendous opportunity: Don’t feel crushed by what has been said and done on the AI front.  The field is wide open.  Don’t let yourself be dragged too deeply into one of the current modes of thinking.  It needs a fresh approach.  Think!  All it takes is to find a tiny hidden door in the seemingly smooth wall in front of us.

Computer or Brain?

Of course, this was a silly question.  Even if we were not interested in applications, we would need the computer to try out and demonstrate our ideas about the brain.  And without the human brain as proof of existence, the quest for artificial intelligence would not exist in the first place.  To make headway, we need everything the computer and the brain-and-mind communities have to offer.  But we also have to be prepared that both may be limited by a mindset that excludes the solution.

Here is what I personally think is the key issue: the data structure of the brain.  All we experience in life is brain states.  Everything we can see or hear or smell or dream or imagine, it all is expressed as physical configurations of neural signals.  What is this neural code?

Take the computer version: bits.  There is no limit to what bits can express; they are universal.  But this universality may also be a weakness: bits have to be told everything by software, as they have no tendency whatsoever to fall into any kind of preferred state.  Take the brain camp’s favorite: neurons as elementary symbols.  They can be trained by general mechanisms to fall into preferred states, but their expressive power is very limited: the brain’s state, according to this view, is a pool of active neurons, a vector of activity, a very limited data structure indeed.  By contrast, all symbol systems ever created by humans are based on elementary symbols (e.g., letters) plus a flexible means of combining those elements in structured ways into more complex symbols (words, phrases, sentences, …).  The presently dominant views of the neural data structure have no such means of building up structured complex symbols, a predicament which has been called the binding problem.
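A toy example may make the binding problem concrete.  The feature names and encodings below are invented for illustration: a brain state modeled as a mere pool of active feature-neurons cannot distinguish two scenes that differ only in how colors are bound to shapes, whereas a structured code can.

```python
def flat_state(active):
    """Brain state as a mere pool of active neurons: an unstructured set of features."""
    return frozenset(active)

# Two different scenes...
scene_a = flat_state(["red", "square", "blue", "circle"])   # red square + blue circle
scene_b = flat_state(["blue", "square", "red", "circle"])   # blue square + red circle

# ...collapse to the same activity pattern: the bindings are lost.
print(scene_a == scene_b)   # True

# A structured, compositional code keeps track of which color binds to which shape.
scene_a_structured = {("red", "square"), ("blue", "circle")}
scene_b_structured = {("blue", "square"), ("red", "circle")}
print(scene_a_structured == scene_b_structured)   # False
```

The same four elementary symbols are active in both scenes; only a means of combining them into complex symbols, here crudely modeled as pairs, preserves the difference.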

What is needed, then, is a concept of how neurons, under the influence of a ubiquitous and general mechanism, fall into structured arrangements which act as the Lego blocks of the mind, themselves endowed with the tendency to combine into ever more complex structures.  In a word, all it takes to create AI is to combine the expressive power of bits with the self-organization of neurons.
