Arrow of time

On (artificial) intelligence

Consider this armchair philosophising on the possibility of artificial intelligences.

My stance on the existence of our biological intelligence is that it's a matter of accident, in the sense that all evolved traits are basically accidents: random mutations which get selected for over a large time span. It was very probably honed during that time, as individuals which had "more of it" had an advantage over the others. However, something singular must have happened some time ago, some place, which started it all, or else we would not be alone as a species capable of planning day trips to the Moon.

Science fiction is full of artificial intelligences developed by humans, usually in a way not too different from developing just another type of software. As if it were just a matter of finding the right algorithms and having enough programmers typing code like a million monkeys, as if the systematic approach must surely work. It might, but I doubt it.

One thing is for sure: the metaphysical law which says "anything that can happen, will" dictates that new types of intelligence will appear, and we know this from the simple fact that it already happened once. (The same goes for the possibility of E.T. life.)

I think it is much more likely that an intelligence will pop up eventually, prodded into existence by our own activities, but that it will not be on purpose; and it is likely, though not certain, that we will not even be aware of it, nor ever able to communicate with it. Here is why I think so.

Intelligence is invariant in time

By this I mean that, if we treat intelligence as a process similar to the computational processes we already understand and can perform, it really doesn't matter whether those processes execute on the time scale we ourselves perceive as significant. Just as the result of an algorithm will be the same whether it is run on a slow or on a fast computer, I think the same goes for thought processes. There is one thing which I think sets the lower limit on the speed of thought, and that is how effectively it deals with its surroundings. If there is no outside pressure (e.g. the thinking being will be eaten if it does not think fast enough), there is a real possibility for glacial-style intelligences. But then of course, intelligence in such a non-competitive environment, without strong evolutionary pressures, could only appear by accident.

Most people will say that some animals are intelligent to some (small) degree, and it is fairly common to compare the intelligence of dogs and cats to that of children (e.g. dogs are as intelligent as two-year-old children), but there is an obvious gap between us and, well, everything else (and that includes the dolphins). We simply don't know yet whether it is just a matter of the "amount" or "strength" of the intelligence (itself an iffy notion, as it surely depends on a huge number of factors, most likely ones such as the span of short-term and long-term memory and the ability to correlate), or whether there is a qualitative difference.

Intelligence is invariant in space

By this I mean that intelligence is independent of its physical substrate, i.e. the size of the "brain." As long as the proper processes are being carried out, it doesn't matter whether the "hardware" it is running on is small or large. Again, barring outside pressure which dictates efficiency, there is no reason why an intelligence would only appear in small, highly efficient physical structures rather than in vast and inefficient ones. Taking ideas from SF again, there could be vast, slow and nebulous intelligences scattered around the Universe.

Again, we may speculate on the lower boundary for the size of this hardware: it might just be that, simply because of hard physical limits on the size of physical particles, there is a lower limit on the size at which the processes essential for intelligence can be implemented. We are now approaching the physical limits of miniaturization at which a piece of matter behaves as an electrical transistor, and if it takes at least X transistors to implement a certain logic gate (a CMOS NAND gate, for example, needs four), we simply cannot go lower than that.

I doubt that there is an upper limit on the size of a "brain." It will certainly be limited by the speed at which its components communicate, and a larger brain will be slower (even if most of it is storage space); that speed will dictate its "usefulness" in the sense of whether it will be able to take control of its surroundings. But then again, if there are no evolutionary pressures, the size is literally irrelevant.

Intelligence is invariant across algorithms and physical structures

A computational environment (hardware + software) might be Turing-complete while implemented in a huge variety of ways. As the original Turing machine demonstrates, even electricity is not a requirement. Basically, the single requirement our entire computing infrastructure is built upon is that there is something which can compare one thing with another and, based on the result, change that thing, change its own internal state, and/or move on to some other comparison. You can implement this principle in bricks, and eventually you will be able to run MS Windows on it (though the bricks will be loud).
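To make that "compare, change, move on" idea concrete, here is a minimal sketch of a Turing-style rule table in Python (my own toy illustration, not taken from any real system): a dictionary of (state, symbol) -> (write, move, next state) rules is the entire "machine".

```python
# The whole "computer" is one rule table: look at the current (state, symbol)
# pair, and the table says what to write, where to move the head, and which
# state to enter next. This toy machine flips every bit on the tape and
# halts at the first blank.
RULES = {
    # (state, symbol): (symbol to write, head movement, next state)
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", " "): (" ",  0, "halt"),
}

def run(tape, state="scan", head=0):
    cells = list(tape) + [" "]                 # blank-padded tape
    while state != "halt":
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip()

print(run("10110"))                            # -> 01001
```

The table is the whole program; swap in a different table and the same three-line loop computes something else entirely, bricks or no bricks.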

We don't yet know of a similar rule which would result in an intelligence, but I would be very surprised if there isn't one. Even if there is one, though, it does not follow automatically that we will be able to find it (soon, or ever). The reason is that, by analogy, we don't think dogs understand the concept of "comparing" at a sufficiently abstract level to produce Turing machines; we may well be in the same position with respect to whatever rule underlies intelligence.

If a sentient, self-aware intelligence appears due to our workings, I think it will not be intentional and that it will evolve because of, and in, the infrastructure we provide, by evolutionary accident. In that case, we will almost certainly not notice it and will not, at least at first, be aware of it. Life and intelligence exist because of processes in a changeable environment - for example chemical reactions (we ourselves are just big chemical reactions with legs) - but I think any medium with sufficiently abundant degrees of freedom will do just fine. My favourite branch of ideas in SF is about the "Internet" coming alive, in which the connections and the processing equipment on the net have a role similar to that of neurons in a living being. If something like that were true, such a brain might be as inaccessible and incomprehensible to us as our own neurons are to our own brains.

There are even more exotic possibilities, though. Each of us individually is affected every day by a huge number of influences and makes a large number of decisions (and the number of decisions every one of us makes daily seems to be growing!). Imagine a substrate, a "hardware" of sorts for an intelligence, composed of something like the meme-space of all the individual images of cats and celebrities and bad news reports and everything else which passes through our own biological processors, our brains. It would take an unimaginable bird's-eye view to spot intelligent patterns in that.

To simulate or not to simulate

As intelligence is independent of time, space and the substrate which implements its processes, why isn't it more common? Currently our best guess on how to create an A.I. relies on replicating some of the processes of learning we notice in our children, in the hope that something will click if we stumble upon the right ones. I think these are the beginnings of the discovery of Turing-like rules for the processes of intelligence, but we are not yet good enough at it. One early experiment with machine learning centred on teaching the system to distinguish photos with (military) tanks in them from those without. The images were prepared as usual, by choosing some at random for learning and the rest for testing what was learned; the system trained successfully and sufficiently on those, yet it was completely useless on an independent set of images. A later investigation revealed why: purely by accident, in the original set of images, all of the pictures with tanks had been taken on a cloudy day and those without tanks on a clear and sunny day. The system had learned to distinguish not tanks, but whether the photo was taken on a day which would require carrying an umbrella. Eerily human, don't you think?
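As an aside, this failure mode is easy to reproduce with a toy model. Here is a sketch with entirely made-up data (nothing to do with the original experiment): a simple linear classifier trained on "images" in which the label accidentally correlates with brightness.

```python
import random

# Hypothetical stand-in for the tank photos: each "image" is reduced to two
# numbers, its overall brightness (the accidental cloudy/sunny cue) and a
# noisy "tank-shaped pixels" signal (the thing we actually care about).
def make_image(has_tank, cloudy):
    brightness = (0.2 if cloudy else 0.8) + random.gauss(0, 0.05)
    tank_signal = (0.4 if has_tank else 0.1) + random.gauss(0, 0.5)
    return [brightness, tank_signal]

# Training set with the accidental correlation: every tank photo is cloudy,
# every tank-free photo is sunny.
train = [(make_image(t, cloudy=t), t) for t in (True, False) for _ in range(200)]

# Crude perceptron-style training on the two features.
w, b = [0.0, 0.0], 0.0
for _ in range(50):
    for x, label in train:
        pred = (w[0] * x[0] + w[1] * x[1] + b) > 0
        err = (1 if label else 0) - (1 if pred else 0)
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

def accuracy(dataset):
    return sum(((w[0] * x[0] + w[1] * x[1] + b) > 0) == label
               for x, label in dataset) / len(dataset)

# Independent set where the weather no longer tracks the label.
test = [(make_image(t, cloudy=random.random() < 0.5), t)
        for t in (True, False) for _ in range(200)]

print("training accuracy:", accuracy(train))    # close to 1.0
print("independent accuracy:", accuracy(test))  # far lower: it keyed on brightness
```

The classifier looks fine on its own training data and falls apart the moment the accidental correlation goes away, which is exactly what happened with the umbrella weather.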

In parallel, and not extensively connected to that effort, we are working on machines which simulate the brain's tissues - the neurons and their connectivity - also in the hope that if we build large enough computers and efficient enough neural network algorithms, something will click. It is almost completely certain that the SF idea of uploading our minds into such machines will never happen, simply because the brain and our intelligence evolved with each other and for each other, and almost certainly rely on extremely subtle interactions of the whole organ and even the whole body.

Another early A.I. anecdote is of an electronics engineer who wanted to evolve a certain type of circuit from scratch by connecting the evolutionary algorithm to an FPGA programmer unit, so that each candidate circuit diagram could be automatically programmed into the hardware and then tested in a realistic fashion (as opposed to simply being simulated). After sufficient iterations, the evolutionary algorithm succeeded and stopped - it had produced a circuit with the desired properties. However, when the engineer inspected the result he was completely baffled: the circuit diagram made absolutely no sense, yet the circuit clearly worked. The nonsense was of the type of having components completely disconnected, or short-circuited, or used completely inappropriately, yet "repairing" the circuit by removing such problems made it stop working.

What happened was that the circuit had evolved for the physical properties of the particular FPGA on which it was implemented, and that involves electrical effects which are usually ignored. The circuit was not built by starting from the laws of electrical circuits and building up neat designs from there, but from a random mess of physical components with fringe influences on each other, subjected to evolutionary pressure. In effect, the "disconnected" circuit elements still influenced other elements simply by being there, having minute amounts of current passing through them and evoking effects in the silicon chip and in the other elements on the same chip. The "misused" components still produced some effect, even if they were not designed for that specific effect. This simple evolved circuit was also essentially incomprehensible, as completely documenting how it does what it does would require literally knowing how individual volumes of doped silicon (voxels?) behaved and influenced every other such volume on the chip. Now try that with brain chemistry.
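For what it's worth, the evolutionary loop behind such experiments is astonishingly simple. Here is a bare-bones sketch (my own toy version, with a trivial bit-matching "specification" standing in for the real FPGA test bench):

```python
import random

# A stand-in "specification": the bit pattern we want an evolved candidate to
# reproduce. In the real experiment the fitness test was the behaviour of the
# candidate programmed into an actual FPGA, not a simple comparison like this.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    # How many positions match the desired behaviour?
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random mess and keep breeding the best candidates.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        break                                   # "desired properties" reached
    parents = population[:10]                   # selection pressure
    # Keep the parents (elitism) and fill the rest with mutated offspring.
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("generation:", generation, "best fitness:", fitness(best), best)
```

Everything interesting in the FPGA story lives in the fitness test: because the candidates were scored on real hardware, every physical quirk of that particular chip was fair game for the selection pressure to exploit.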


Disclaimers: I'm pretty much a generic programmer, not specialized in AI. I also don't claim originality for what I wrote - I could be paraphrasing something I've read (fiction or non-fiction). It's just a subject I'm interested in.

