To human beings, alien intelligence is an alien manifestation of intelligence. Artificial intelligence is regarded as alien if it has been programmed to follow principles different from those of human intelligence. The intelligence of dolphins and whales is alien. Any extraterrestrial intelligence might also be alien. Or maybe there is no such thing as alien intelligence. One possible cause for the universality of intelligence is convergent evolution. Whatever their genetic background, large sea animals develop similar streamlined forms. Correspondingly, under the pressure of selection, intelligence may converge on the same computational strategies. We do not know.
Philosopher Gottfried Wilhelm Leibniz (1646-1716) was among the first to grasp the possibility of an unnatural superintelligence. He was also a builder of apparatuses, and his mechanical calculator was more advanced than Pascal's. Most significant, however, was the 'first programming language' that Leibniz developed without a computer. A precursor of symbolic logic, the language was called Characteristica Universalis.
It is interesting that monads, the basic objects of Leibniz's metaphysics, were devoid of spatial dimensions, although they were conscious of each other. In monadology, the basic substance of reality is therefore composed of software entities, atoms with awareness.
Having realised the analogy between the logical truth values (TRUE, FALSE) and the numbers 1 and 0, George Boole founded symbolic logic. Logical deduction was transformed into arithmetic. Almost all modern computers are still based on Boolean algebra.
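Boole's reduction of logic to arithmetic can be sketched in a few lines of Python; the helper names are mine, for illustration only:

```python
# Boole's insight: on the domain {0, 1}, the logical connectives
# become ordinary arithmetic.
def AND(a, b): return a * b          # conjunction is multiplication
def OR(a, b):  return a + b - a * b  # inclusive disjunction
def NOT(a):    return 1 - a          # negation

# Deduction as calculation: "a implies b" is equivalent to OR(NOT(a), b).
implies = lambda a, b: OR(NOT(a), b)
```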
The idea of a mechanical principle, an algorithm that would solve all mathematical problems, gained ground towards the end of the 19th century. In the set of 23 problems presented by David Hilbert in the year 1900, the 10th asks for a general method for deciding whether a polynomial equation with integer coefficients has integer solutions. In 1970 such a method was proved impossible.
The boundaries of calculation were thus set on a collision course. Kurt Gödel's incompleteness theorem of 1931 was the first to bombard these boundaries. Today one could say that there are more mathematical truths (an uncountable number) than there are proofs (a countable number), and thus there are not enough proofs for all truths. Alan Turing asked whether a computer could deduce if another computer would ever halt. The halting problem cannot be formally solved: Turing proved that no such machine can be built. In the 1970s, the question at issue was what kinds of tasks could be calculated in a 'reasonable' time. Today we know thousands of NP-hard problems for which every known algorithm requires calculation time that increases exponentially with the size of the problem.
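Turing's argument can itself be sketched as a program. Suppose, for contradiction, that a halting decider existed; the names below are illustrative, not real library calls:

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts.
    Turing proved no such function can exist."""
    raise NotImplementedError

def contrary(program):
    # Do the opposite of whatever the oracle predicts for the
    # program applied to its own text.
    if halts(program, program):
        while True:        # loop forever if predicted to halt
            pass
    return "halted"         # halt if predicted to loop forever

# Asking halts(contrary, contrary) is contradictory either way:
# if it says True, contrary loops; if False, contrary halts.
# Hence the oracle cannot be built.
```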
To date, and despite these setbacks, the Church-Turing thesis remains unrefuted. In essence, it argues that everything that is calculable can be calculated with any simple logical machine, as long as the machine fulfils certain basic requirements. With adequate memory and time, even the simplest pocket calculator can perform the same tasks as any supercomputer, provided that its programming language includes a conditional jump instruction.
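The role of the conditional jump can be shown with a toy register machine (the instruction names are invented for this sketch). Without the jump the machine can only run straight through its program; with it, it can loop, and with unbounded registers it can in principle compute anything a Turing machine can:

```python
def run(program, regs):
    # program: list of tuples; regs: dict of register values.
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":
            regs[args[0]] += 1
        elif op == "dec":
            regs[args[0]] -= 1
        elif op == "jnz":            # the crucial conditional jump:
            if regs[args[0]] != 0:   # jump if register is non-zero
                pc = args[1]
                continue
        pc += 1
    return regs

# Add register b into register a by looping until b reaches zero:
result = run([("dec", "b"), ("inc", "a"), ("jnz", "b", 0)],
             {"a": 2, "b": 3})
print(result)   # {'a': 5, 'b': 0}
```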
Roger Penrose disagrees. He claims that people are capable of things that machines are not, and he looks to quantum mechanics for the difference between the human mind and machines.
Computer and brains
Moore's Law, named after Intel's co-founder Gordon Moore, states that computer technology doubles in performance every two years. This law has held good for decades now. Generation after generation, the performance of microprocessors settles on a straight line on a logarithmic scale, as predicted. The line will meet the calculating capacity of the human brain in approximately the year 2035. Raymond Kurzweil predicts that home computers will be able to simulate the brain of an insect in the year 2000, the brain of a mouse in 2020, the human brain in 2040 and the sum total of all human brains in 2060.
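The arithmetic behind such predictions is simply compound doubling; a minimal sketch, assuming the two-year doubling period stated above:

```python
# Exponential growth under Moore's Law: performance multiplies by 2
# every `doubling_period` years, a straight line on a log scale.
def growth(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

# Fifty years of doubling every two years:
print(growth(50))   # 2**25, roughly a 33-million-fold increase
```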
The human brain comprises some 10 billion neurons (10^10) and 10^14 synapses between them. The calculation process of a neuron is so complicated that one microprocessor is needed to simulate each neuron. However, the 'clock frequency' of a neuron is only around 1 kHz, which enables one 100 MHz processor to simulate 100,000 neurons by time division. Thus only 100,000 Pentiums are needed for the simulation.
In order to store the synaptic connections and their weights we need 100 terabytes (one terabyte = 1,000 gigabytes) of disk memory. The IBM supercomputer to be completed in 2000 has 8,192 processors and 195 terabytes of disk memory. We are well within schedule, then.
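The figures above can be checked with simple arithmetic. The counts and frequencies are the text's; one byte per synaptic weight is my assumption:

```python
neurons = 10**10            # neurons in the brain
synapses = 10**14           # synaptic connections
neuron_hz = 1_000           # ~1 kHz effective neuron 'clock'
cpu_hz = 100 * 10**6        # one 100 MHz processor

# Time division: one fast processor serves many slow neurons.
neurons_per_cpu = cpu_hz // neuron_hz
cpus_needed = neurons // neurons_per_cpu
print(cpus_needed)          # 100,000 processors

# Storage: one byte per synapse (assumed), in terabytes of 10**12 bytes.
terabytes = synapses * 1 / 10**12
print(terabytes)            # 100 TB of disk memory
```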
In 1943, Warren McCulloch and Walter Pitts presented a model of the neuron in which every neuron performs a simple logical operation (AND, OR, NOT…). Fundamentally, present-day neurocomputing is still based on this idea. However, the neuron has proved far more complicated than was thought, and a whole processor is needed to emulate it.
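The McCulloch-Pitts unit is easy to state exactly: the neuron fires when the weighted sum of its binary inputs reaches a threshold. A minimal sketch (helper names are mine):

```python
def mp_neuron(inputs, weights, threshold):
    # Fire (1) iff the weighted input sum reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Each logical gate is a single neuron with suitable weights:
AND = lambda a, b: mp_neuron((a, b), (1, 1), threshold=2)
OR  = lambda a, b: mp_neuron((a, b), (1, 1), threshold=1)
NOT = lambda a:    mp_neuron((a,),   (-1,),  threshold=0)
```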
The greatest achievements in brain research and cognitive science are still compromised by question marks: the computational function of the cerebral cortex remains unknown, as does the inner language of the brain (the neural code).
Maybe there is no need any longer for a major breakthrough. It might well turn out that the cerebral cortex calculates the general function and that no common neural code exists; but instead this code has been fused into the calculation process.
Mind, consciousness and soul
The core of dualism, the duality of matter and spirit, the mind and the brain, unravels itself in the difference between hardware and software. Are they two separate substances or two sides of the same substance? Both the software and the hardware can exist independently. Yet they are inoperative (dead) until they are 'merged'.
Thanks to the new tools that cognitive science has provided, the question of the nature of consciousness has become the subject of intensive research. I shall briefly describe two theories.
The gamma coherence theory is based on the tendency of the 40 Hz gamma waves of EEG signals, detected in the brain's electrical activity, to synchronise with one another throughout the entire brain. The features represented by two areas of the brain become parts of the same whole when the corresponding nerve impulses are in synchrony. This would explain why consciousness is able to form an integrated whole without a 'homunculus', the focal point of consciousness, which scientists have searched the brain for in vain. A more surprising consequence is that a decentralised consciousness could span thousands of kilometres (this figure is obtained by replacing the speed of nerve impulses with the speed of light). A global electronic consciousness would thus be possible.
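The distance figure follows directly from the gamma period; the reconstruction below is mine. Within one 40 Hz cycle, a signal travelling at light speed covers thousands of kilometres:

```python
c = 3.0e8                # speed of light, m/s
gamma_hz = 40            # gamma-band frequency from EEG
period_s = 1 / gamma_hz  # one synchronisation cycle: 25 ms
reach_km = c * period_s / 1000
print(reach_km)          # 7500 km: a planet-scale coherence radius
```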
The British physicist and brain researcher John Taylor's theory of consciousness is based on two notions. The content of consciousness is composed of direct perceptions and the associative past memories evoked by this input; thus consciousness contains more than mere observation data. Taylor conjectures that the cortical cells activate their close neighbours while inhibiting the more distant ones (lateral inhibition). The activated areas consequently form 'bubbles', which persist for some time without being anchored to any particular spot in the nerve tissue.
Now that the genomes of some twenty organisms have been sequenced, some researchers have tried to ascertain the smallest number of genes (subroutines, in computer language) required by a living organism. The estimates range from 250 to 300. Correspondingly, we may ask what the smallest number of subroutines necessary to produce consciousness is.
It has become fashionable to repel the terror evoked by the idea of alien intelligence by emphasising that the entire human genome would not suffice to specify the detailed wiring of the brain. Therefore we are all individuals - big deal. By the same argument, every tree, blade of grass and beach is unique.
A feeling of guilt makes us emphasise excessively human dignity and individuality. The complex algorithm of the mating system demands that we display more than we have. The resulting burden of shame we then try to unload by conjuring up a philosophical justification for our selfish egos.
Soul = algorithm
Tainted with religious myths of immortality, the word 'soul' has been abandoned by science. Relativity theorist Frank Tipler suggests that at some time in the future it will be possible to resurrect souls with virtual technology. If the mind is compared to a computer program, it needs a physical bearer (disk, electromagnetic field) to survive. The operation principle of a program is called an algorithm. An algorithm is independent of the programming language and the computer used.
Scientists are gradually beginning to solve the algorithms used by the brain. For example, the processing of visual information (occupying approximately 40% of the cerebral cortex) can be reduced to a few dozen rules. The algorithmic realisation of these rules will be solved in the near future.
The disadvantage of the human brain is its disposable nature. A lifetime has long been too short for us to learn everything that we should.
Faced with singularity
The exponential growth suggested by Moore's Law cannot go on forever. In quantum physics, Heisenberg's uncertainty principle is complemented by the theoretical Bekenstein bound, which determines the amount of information that can be contained in a system of a given mass and size. This bound is approximately 10^45 bits for the mass and size of a human being. The development of information technology may halt owing to lack of motivation, or else it will continue until some physical boundary is met.
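The 10^45-bit figure can be reproduced from the Bekenstein bound I ≤ 2πRE/(ħc ln 2); the human mass and radius below are my rough assumptions:

```python
import math

hbar = 1.055e-34         # reduced Planck constant, J*s
c = 3.0e8                # speed of light, m/s
mass = 70.0              # kg: rough human mass (assumption)
radius = 1.0             # m: rough human size (assumption)

energy = mass * c**2     # rest energy, E = mc^2
bits = 2 * math.pi * radius * energy / (hbar * c * math.log(2))
print(f"{bits:.1e}")     # on the order of 10^45 bits
```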
Atom-level calculation will be achieved well before the Bekenstein bound. A lump of sugar can store 10^20 bits. The surprise lies in the fact that, besides the lump of sugar itself, nothing else is needed. Peta technology operates with 10^15-bit memories and 10^15 Hz clock frequencies. Petahertz frequencies coincide with those of visible light. A coherent laser beam organises the random thermal vibrations of matter into a self-maintaining, controlled field: matter becomes conscious.
This jump has acquired the name 'singularity'. It is not a good term since singularity has another meaning in mathematics. Since the word 'turning point' would be too tame, I prefer to use the term 'jump'. After the jump, the 'uncontrollable growth' of information technology will have a direction and a goal. The 'jump' is comparable to the emergence of living forms on earth. The biological era began with single molecules and ends with a single molecule coding a single piece of information. According to database theory, every separate piece of information must be stored in one and only one place. With regard to predictions of apocalyptic catastrophes, we may assume an amused critical perspective. I like to think that this jump will take place in the year 2178. It is far enough into the future to let us ignore it, but it allows engineers plenty of time to develop peta technology. It only requires a millionfold increase in the speed of processors from the present giga technology. The previous millionfold increase took twenty years.
We have started to spin towards the singularity and there is no coming back even if we wanted to. The Internet capacity installed in the world is already enough to create a global consciousness. A corresponding phenomenon took place nearly twenty years ago, when the memory limits of the Apple ][ and the Commodore 64 were reached, but people soon learnt to write games more intelligently. The Internet has already been used as a supernet three times. Encrypted messages have been decoded, prime numbers sought and found, and cosmic messages retrieved with home computer screensavers. Unless the Internet is soon closed down altogether, the Net and its users will self-organise into distributed net algorithms.
The cosmic significance of the jump is modest: it will take tens of thousands of years before information has dispersed into the Galaxy at the speed of light. Maybe someone there will deign to respond: "Welcome to the club, earth's screensavers".
Artificial intelligence was prematurely publicised twenty years ago. Now the field is ominously silent. All the way from science fiction to cyberpunk, artificial intelligence has been an outlawed rogue who always wins in the end. It remains to be seen when artificial people will have to be granted human rights… and human beings deprived of them.
Erkki Kurenniemi
References
George Boole:
An Investigation of the Laws of Thought: On Which Are Founded the Mathematical Theories of Logic and Probabilities, Dover Publications, 1854
V. Braitenberg & A. Schüz:
Cortex: Statistics and Geometry of Neuronal Connectivity, 2nd ed., Springer, 1998
Donald D. Hoffman:
Visual Intelligence: How We Create What We See, Norton, 1998
Raymond Kurzweil:
The Age of Spiritual Machines, Viking, 1999
Erwin Schrödinger:
What is Life?, Cambridge University Press, 1944
John Taylor:
The Race for Consciousness, MIT Press, 1999
Frank J. Tipler:
The Physics of Immortality, Doubleday, 1994
The articles of media researcher Erkki Kurenniemi from 1979 to 1999 are now available in a collection entitled Askeleen edellä - "Todellisuus on aina askeleen edellä mielikuvitusta" (A step ahead - 'reality is always a step ahead of imagination', only in Finnish). The collection is published in the Kysymysmerkki publication series of the Museum of Contemporary Art Kiasma. The publication series familiarises the public with contemporary art and discussion about it. Available at Kiasma Store.