The brain’s signals are analog; those in the computer are digital. Neural signals are noisy and unreliable; digital switches are designed to be absolutely reliable. It is a great irony that on a global level this relationship inverts: in unstructured natural environments the brain performs reliably whereas the computer fails. Is this a fluke, or is there something magical about analog signals?

Digital systems rely on the absolute reliability of signals. The process in the machine has to conform strictly to the program – if it strayed from the critical path set by the program by only a single bit in the typical chain of millions of switching operations, there would be nothing in the machine to get it back on track. The computer by itself is completely blind to sense and purpose, making it totally dependent on the analog world of humans.

As the brain has to generate its own sense and purpose anyway, the condition of determinacy may be greatly relaxed. But given the level of noise and the vagueness of its analog signals, isn’t the nervous system taking it a bit too easy? How can the brain still get organized, how can it make sense of and pursue general goals in widely varying situations? Evidently it is not totally drowned in noise and uncertainty; it is able to think coherently and to perform complex tasks such as steering a car through dangerous situations. The strategy of the brain must be the opposite of that of digital systems in some rather fundamental way.

Uncertainty starts with the sensory signals. Although my eye is a superbly engineered organ and its output neurons function with fair reliability, its signals are highly ambiguous with respect to the intrinsic scene quantities I am interested in – distances, shapes, colors and so on – and have little to say about the identities, properties or functional significance of objects. The core of the difficulty is that there is no straightforward way from the sensory signals to a representation of the intrinsic properties of the scene. It is rather like a riddle: once you have the solution it clearly fits the bill, but getting there is a matter of searching for needles in haystacks. Computer graphics can generate image sequences from scene descriptions in a deterministic process, but the inverse, going from the images to the scene description, needs guesswork.
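
To make the asymmetry concrete, here is a toy sketch in Python (the one-dimensional “scene” and both function names are illustrative inventions of mine, nothing more): the forward direction is a single deterministic function call, while the inverse direction has nothing better to do than search through guesses.

```python
import numpy as np

def render(position, width, n_pixels=32):
    """Forward process: a scene description (a bright bar at some
    position, of some width) maps deterministically to one image."""
    image = np.zeros(n_pixels)
    image[position:position + width] = 1.0
    return image

def invert(image, n_pixels=32):
    """Inverse process: no formula recovers the scene from the image;
    we can only search the space of scene hypotheses for one that fits."""
    best, best_err = None, np.inf
    for position in range(n_pixels):
        for width in range(1, n_pixels - position + 1):
            err = np.sum((render(position, width, n_pixels) - image) ** 2)
            if err < best_err:
                best, best_err = (position, width), err
    return best

observed = render(position=10, width=5)
print(invert(observed))  # (10, 5) – recovered only by trying hypotheses
```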

The brain is a vast set of neurons. According to an extensive body of experiments, each of them can be seen as a symbol or hypothesis that either is or isn’t relevant to the current situation. If one of them is active, thus signaling that it is relevant, its excitatory or inhibitory connections to other neurons can be seen as arguments for or against, respectively, the relevance of those other neurons. If a certain neuron expresses the hypothesis that this point on a smooth surface in front of me is at a certain distance, then excitatory connections could encourage neurons relating to neighboring points and similar distances to regard themselves as relevant, too.
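
A minimal sketch of this depth example, under simplifying assumptions of my own (a one-dimensional row of surface points, a handful of candidate depths per point): the neuron (i, d) stands for the hypothesis “point i lies at depth d”, excitatory weights encode the argument “a neighbor at a similar depth supports me”, and inhibitory weights encode “a different depth at my own point contradicts me”.

```python
import numpy as np

N_POINTS, N_DEPTHS = 8, 5   # hypothesis neuron (i, d): "point i is at depth d"

# W[i, d, j, e] is the weight of the argument from neuron (j, e) to (i, d).
W = np.zeros((N_POINTS, N_DEPTHS, N_POINTS, N_DEPTHS))
for i in range(N_POINTS):
    for d in range(N_DEPTHS):
        for j in (i - 1, i + 1):          # neighboring surface points
            if 0 <= j < N_POINTS:
                # Excitatory arguments: a neighbor at the same or a
                # similar depth supports my relevance (smooth surface).
                W[i, d, j, d] = 1.0
                for e in (d - 1, d + 1):
                    if 0 <= e < N_DEPTHS:
                        W[i, d, j, e] = 0.3
        for e in range(N_DEPTHS):
            if e != d:
                # Inhibitory arguments: a different depth at the *same*
                # point is a mutually exclusive fact and votes me down.
                W[i, d, i, e] = -2.5
```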

Looking at the grand picture, a valid interpretation of a situation is a large set of active neurons that excite each other while all other neurons are inhibited (if they receive excitation in the first place). In other words, situations are represented by large numbers of relevant hypotheses, each referring to an elementary fact, supporting each other’s relevance by positive arguments, while other hypotheses, although receiving support from the sensory evidence, are voted down as currently irrelevant by negative arguments. (Most of these negative arguments connect mutually exclusive facts, like different distances referring to the same point on a surface.)

The perceptual process (or the process of coping with situations in general) amounts to collapsing vast clouds of hypotheses aroused by initial evidence, leaving active only tiny subsets that support each other by mutual arguments.
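
Continuing the toy sketch from above (and it is no more than a cartoon of the idea, not a model of real neural dynamics): noisy evidence arouses the whole cloud of depth hypotheses, and a few dozen rounds of mutual excitation and inhibition collapse it to one consistent depth per point.

```python
# Continues the sketch above: reuses W, N_POINTS, N_DEPTHS.
# Noisy initial evidence: every hypothesis gets some support, with a
# weak bias towards the "true" smooth surface at depth 2.
rng = np.random.default_rng(0)
activity = rng.uniform(0.4, 0.6, size=(N_POINTS, N_DEPTHS))
activity[:, 2] += 0.2

for _ in range(60):
    # Each neuron sums the weighted votes it receives from all others...
    votes = np.einsum('idje,je->id', W, activity)
    # ...and its activity drifts in the direction the votes demand.
    activity = np.clip(activity + 0.05 * votes, 0.0, 1.0)

print(activity.argmax(axis=1))  # should print [2 2 2 2 2 2 2 2]:
                                # one consistent depth survives per point
```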

Let’s reflect back on our discussion of digital vs. analog. Whether a hypothesis is currently relevant or not is decided in a voting game in which each individual signal on its own carries little weight and reliability, with many signals flickering in and out as hypotheses appear relevant for a moment until they are ruled out by competitors and contradictions. A temporary lack of determinism in the signals of the analog brain is thus unavoidable. The enormous reliability of the global brain process, however – the reason we place ultimate trust in human involvement rather than in digital automata when critical, novel situations are to be mastered – is due to the brain’s sensitivity to all relevant aspects of a situation and its ability to settle into states of mutual consistency between all of them.

The Wisdom of Connections

Although I haven’t spoken of it yet, all of this has to rely on the structure – the wisdom, so to speak – of the neural connections. Brain dynamics cannot possibly project out a consistent subset from the original jumble of alternative hypotheses if the connectivity structure doesn’t support it – if there are too many connective pathways that contradict each other. A mathematical structure, to use an analogy that I feel is deeply relevant, would break down in its entirety if a proposition could be proved both true and false by two lines of argument. Consistency is the ultimate criterion for the validity of a mathematical structure, and it is just as essential for the quality of the brain’s connectivity.

Here I hear you cry: wait a minute, internal consistency of the brain is one thing, but what about its relation to the outside world? Well, isn’t that also a matter of consistency? A baseball player running to catch a flying ball can only succeed if the signal pathways in his visual and motor systems are consistent with the causal chains in his body and the external world.

Consistency of connections in the brain is achieved by learning and self-organization. The underlying mechanism is synaptic plasticity: connections that exert appreciable effects on their target neurons are strengthened. Now, an appreciable effect on a neuron requires simultaneous activity in a dozen (or so) converging fibers. As a consequence, alternate pathways from the same source to the same target help each other to grow. To prevent all-to-all connectivity, this growth is balanced by competition between simultaneously active converging connections, a force driving the network towards sparse connectivity patterns. The process just described is called network self-organization. It creates well-organized connectivity patterns called net fragments.
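
In code, I would caricature these two opposing forces roughly as follows (the linear update rule and the fixed “budget” per neuron are stand-ins of my own for a whole family of plasticity models, not a claim about the actual biophysics): Hebbian growth strengthens whatever fibers are co-active with an appreciable effect on their target, while normalization across the converging fibers makes them compete for a fixed total strength, driving the pattern towards sparsity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 20, 5
w = rng.uniform(0.0, 0.1, size=(n_post, n_pre))  # weak initial connectivity

for _ in range(500):
    pre = (rng.random(n_pre) < 0.2).astype(float)  # random presynaptic activity
    post = w @ pre                                 # effect on the target neurons
    # Hebbian growth: fibers that contribute to an appreciable effect on
    # their target are strengthened (co-active pathways help each other).
    w += 0.01 * np.outer(post, pre)
    # Competition: the fibers converging on each target share a fixed
    # budget, so the growth of one connection starves the others.
    w /= w.sum(axis=1, keepdims=True)

print((w > 0.05).sum(axis=1))  # each target keeps only a sparse set of inputs
```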

Network self-organization is relatively slow, but once net fragments have been formed they can be activated literally in the blink of an eye. Net fragments are the Lego blocks of the mind. They are chunks of mental structure that readily combine in ever new arrangements into larger nets – nets that again have the same structure that is stable under self-interaction. This is the way our brain creates situation awareness: it activates nets of immense numbers of hypothesis-neurons that cover all aspects of the situation and are mutually supportive by virtue of chains of arguments – synaptic pathways that are consistent with each other.

Apply this to perception: the sensory input is highly ambiguous and creates a large cloud of hypotheses. Within this cloud there is a sparse subset of neurons that form net fragments. Out of these net fragments a still sparser subset fits together into a global net. Activating this global net competitively silences all other fragments and neurons that are not part of consistent networks. The global net thus established interprets and explains the sensory input in terms of hypotheses that relate to the external reality and that are consistent with each other in terms of previous experience coded into synaptic connections.

Of course, all of this can be simulated on a digital computer. But that goes against the grain of the brain process. Letting neural membrane potentials drift through intermediate values until they reach a certain momentary target value consumes very little energy. Representing them by floating-point numbers that have to be updated again and again and whirled around in the digital switchyard of the computer consumes a lot of power. The brain has a dedicated molecular processor for each neural membrane patch and for each synapse, and all of its operations are in this sense fully parallelized. All these nano-processors are connected by axons – ionic wires – but very little information is actually moved around: not the values of quantities (synaptic weights, membrane potentials etc.) but only their updates, communicated by terse signals in the form of neural pulses. The reduced precision requirements for individual variables and the parsimony of actual signal transport make it possible for the brain to realize an enormous computational power (Peta-Ops, if each of the 10¹⁵ synapses of the brain is assumed to be ticking once per second) while consuming less than 100 watts. This calls for a completely new computational infrastructure as the basis for true AI.
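
For what it’s worth, the back-of-the-envelope arithmetic behind those numbers (the synapse count is a common rough estimate, the one-tick-per-second rate an assumption):

```python
synapses = 1e15      # rough estimate of the number of synapses in the brain
rate_hz = 1.0        # assumption: each synapse "ticks" once per second
power_w = 100.0      # upper bound on the brain's power consumption

ops_per_second = synapses * rate_hz       # 1e15 = one Peta-Op per second
ops_per_joule = ops_per_second / power_w  # 1e13 synaptic events per joule
print(f"{ops_per_second:.0e} ops/s at {power_w:.0f} W -> {ops_per_joule:.0e} ops/J")
```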

Apart from these technical considerations, the differences between the analog and the digital world are fundamental. The digital structures in our devices are only viable by being connected by umbilical cords to developers who are distant in space and in time and yet exert detailed control over my cell phone, laptop or, eventually, “autonomous” car, with grave sociological consequences. (By the way, this is also the case with present-day neural networks and deep learning.) Future analog devices that are able to perceive their immediate environment and to learn and react to it directly will generate a totally different culture, one in which we users can free ourselves again from the straitjacket of globally homogeneous user communities.

BTW: I promised in my last blog to describe the mechanism by which abstract schemas are applied to concrete situations. This would be the place to do so, but I don’t want to bloat this post, so I must ask for your patience.
