AI: Nothing but Tinned Human Thought?

In spite of all the feverish talk about it these days, true artificial intelligence remains a fancy. We know very well what to expect from an autonomously behaving animal or from a human being, yet it is all too plain that when dealing with digital devices we get nothing but long-past human thought trying to foresee the current situation.

We will never have fully autonomous vehicles and robots without breaking through this glass ceiling, and even now we live with severe social and economic restrictions.

Confess: even if you talk to a digital agent in plain language, you sense very well that all the reactions you will ever get have long been prepared, that you are in effect talking to a menu, and as soon as you deviate from standard situations you would rather contact a real person.

Before looking beyond the glass ceiling, let me argue my case. Two mutually jealous fields have been attempting to emulate human intelligence over the past five or six decades: one based on the algorithmic approach (AI in the narrow sense), the other on artificial neural networks (ANNs).

Algorithmic AI

In the algorithmic approach, a human programmer examines a particular problem to understand its essence, and then fuses this essence into an algorithm.

Execution of this algorithm is then interpreted as a display of intelligence, but in reality it is, of course, canned human intelligence.

All possible situations coming up during application have to be considered and dealt with ahead of time.

The art and ambition of this approach is, of course, to subsume large arrays of particular cases under common abstract structural principles. And over the decades, computer science has, to be sure, made enormous strides in this direction, enlarging the scope of certain program systems by many orders of magnitude (however that could be quantified).

The most impressive achievements in this direction are software tools capturing the architecture of the recurring patterns and operations within large domains. Just think of tools for text processing, computer algebra, accounting, computer graphics, building design or even for software development itself.

The judgement that algorithmic AI is nothing but recorded human intelligence would, of course, totally lose its weight if a fundamental algorithm were written whose application domain was just as wide as that of the human mind. This had indeed been the ambition of the founders of AI, Minsky, Winograd, Schank, Simon and Newell, who thought in terms of abstract schemas, frames or scripts and envisioned General Problem Solvers and cognitive architectures.

This is certainly the direction to go, and these attempts will need to be revived. In a later post I will discuss why these approaches stalled in the late Seventies. Among those reasons (apart from grossly insufficient computer power) were the failure to recognise the need for a data structure general enough to form a common basis for all things mental (perhaps the core issue) and the failure to recognise the importance of learning from example.

Both of these points are emphasized by ANNs, the eternal competitor to algorithmic AI.

Artificial Neural Networks

ANNs derive their romantic appeal from the claim of being inspired by the brain itself. They come along with a fundamental data structure aspiring to be the one used by the mind to express all the mind is able to express, and they come along with mechanisms for learning from example.

ANNs, too, have long been held back by lack of computer power and, at least in their current form, by lack of massive amounts of data. Well, both have become available: the Digital Giants, at any rate, now have million-processor clusters and big data at their disposal.

Deep learning (DL) is all the rage and is seen by many as the gateway to true intelligence in the machine.

In spite of all the hype and enthusiasm, DL unfortunately doesn’t escape the judgement of being nothing but tinned human intelligence.

Any trained deep learning system is domain specific, the domain and the problem to be solved being defined by the training data set.

In order to learn to classify objects into ten thousand different types, a DL system needs hundreds of millions of hand-labeled images, each prominently containing one or a small number of objects.

I say the intelligence lies in the brains of the internet users who did the labelling.

Intelligent Design

The old argument against Darwin, that a complex entity like an organism or a watch necessarily implies the existence of an intelligent designer, relates to a very deep-seated prejudice that still dominates much thinking about AI as well.

This prejudice has it that an artefact can never be more intelligent than the maker.

Another expression of it is the old quip that if the brain were simple enough for us to understand it, we would be too simple to understand it. Don't laugh; probe your own mind to see whether this isn't also your basic attitude. The argument denies the very existence of creativity, the emergence of novel ideas or structures.

But novel ideas come up, both in evolution and in the mind, and the mechanism is random search in a highly structured universe of patterns.
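One simple way to picture this is a toy random search in the spirit of Dawkins' well-known "weasel" demonstration, sketched here in Python; the target pattern and parameters are of course arbitrary, and retaining partial solutions stands in for the structure of the search space. Blind uniform guessing over a 27-letter alphabet would need on the order of 27^28 attempts, whereas random mutation within this structure finds the same pattern in a few thousand steps.

    import random
    import string

    random.seed(1)
    TARGET = "METHINKS IT IS LIKE A WEASEL"    # arbitrary 28-character pattern
    ALPHABET = string.ascii_uppercase + " "    # 27 symbols

    def score(candidate):
        """Number of positions that already match the target."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    # Structured random search: mutate one random position at a time and
    # keep the variant only if it is no worse -- cumulative selection.
    current = [random.choice(ALPHABET) for _ in range(len(TARGET))]
    steps = 0
    while score(current) < len(TARGET):
        variant = list(current)
        pos = random.randrange(len(TARGET))
        variant[pos] = random.choice(ALPHABET)
        if score(variant) >= score(current):
            current = variant
        steps += 1

    # Typically a few thousand mutations, against roughly 27**28 tries
    # for unstructured guessing of the whole pattern at once.
    print("found in", steps, "mutations:", "".join(current))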

The bitter dispute between Intelligent Design and Darwinism could lose its acrimony if it were realised that the genetic apparatus defines a highly constrained search space [see Evolution blind?].

Likewise, our attitudes concerning (artificial) intelligence could change fundamentally if it turned out that the space in which the mind operates is strongly constrained.

This would then be the basis for realising the dream of Classical AI: a General Problem Solver with an architecture as general as the human mind and yet constrained enough not to get lost in endless search.

DL may be seen as analogous to Darwinian evolution.

In versions applying drop-out (where a random subset of connections is temporarily silenced in each learning step), the system can be interpreted as a population of variants.

The analog of Darwinian fitness is the human-provided teacher signal, and a variant that is unfit, in the sense of producing erroneous signals, will gradually give way to fitter variants with lower error probability.
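For readers who want the mechanics, here is a minimal sketch in Python/NumPy of drop-out as commonly implemented. Note that the standard version silences unit activations rather than individual connections (masking connections is the related DropConnect variant), and all dimensions here are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout_forward(x, W, p_drop=0.5, training=True):
        """One layer's forward pass with drop-out.

        During training, a fresh random mask silences a subset of
        unit activations on every step, so each step effectively
        trains a different variant of the network.
        """
        h = np.maximum(0.0, x @ W)                 # ReLU activations
        if training:
            mask = rng.random(h.shape) > p_drop    # keep units with prob 1 - p_drop
            h = h * mask / (1.0 - p_drop)          # rescale to keep expectations equal
        return h

    # Toy dimensions, just to run the sketch:
    x = rng.normal(size=(4, 8))     # batch of 4 inputs, 8 features
    W = rng.normal(size=(8, 16))    # weights into a 16-unit hidden layer
    h_train = dropout_forward(x, W, training=True)    # one random variant
    h_test = dropout_forward(x, W, training=False)    # the averaged "population"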

The architecture of a DL system consists of constraints on connectivity (layers with feed-forward connectivity, max pooling, and the convolutional constraint in image applications).
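The three constraints just named can be made concrete in a few lines. The following is a minimal sketch in Python using PyTorch, with illustrative sizes (32x32 colour images, ten classes), not a reference implementation.

    import torch
    import torch.nn as nn

    class TinyConvNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                # Convolutional constraint: one small filter bank is shared
                # across all image positions instead of free connectivity.
                nn.Conv2d(3, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),    # max pooling: keep the strongest response per patch
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Feed-forward constraint: strictly layer-to-layer, no recurrence.
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            h = self.features(x)
            return self.classifier(h.flatten(start_dim=1))

    logits = TinyConvNet()(torch.randn(1, 3, 32, 32))    # -> shape (1, 10)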

The fact that a DL system has to be shown millions of samples of a certain entity, e.g., an object type to be classified from an image, whereas humans from the age of three onwards need only a single inspection, disqualifies DL systems from being intelligent.

Evidently the search space of DL systems does not have enough structure: it is far too wide and does not exploit the regularities of the world we live in, leading to long learning times and poor generalization.

That’s why DL needs so much human hand-holding, in effect, intelligent design.

Back-Seat Intelligence

The fact that “machine intelligence” is, as I argue, totally dependent on the human mind, either through explicit programming or through statistical observation of human behavior, relegates it to a back-seat role.

Machines have, so far, no direct access to the reality of an application domain; they live in an encoded world, and their functional repertoire is shaped by past human involvement.

This has important consequences.

First, only routine functions are ever realised; second, there is a long delay between the creation and the use of a function.

If any new problem turns up, if any bug turns up, the loop from the customer to the company and back takes months at best. One of the hallmarks of intelligence is the ability to deal with novel situations on the spot; we tend to call someone stupid who is limited to routine action.

True, many complex human abilities (like seeing, speaking, playing the piano or acting responsibly when driving a car) also need considerable preparation time, but humans can then adapt those abilities to novel contexts:

Our language is infinitely flexible in its adaptation to context, and the pilot who landed his plane on the Hudson in an emergency could not base his decision on an exact precedent.

Machines, by contrast, have to stick to protocols, and when things get critical, humans have to be in the loop.

This is bothersome or costly, but our personal life is much more affected by another factor.

Developing software is expensive, so applications are addressed only if there is a market for them; likewise, collecting big data for training already presupposes a large community, a market of sorts.

A search engine, for example, having no direct access to content and being unable to understand it, has to rely on the statistical observation of the behavior of billions of users. It is true that part of at least the Western worldview is the wish to belong, to be like others, to follow the trend, but do we really want to be restricted to what the masses do?

So we may actually find ourselves between a rock and a hard place.

Human life may one day be gravely endangered by true intelligence in the machine, a serious concern, but our personal freedom is already compromised now by the lack of true intelligence in the machine.
