Artificial Intelligence? What is the Meaning of Life?

John Harrison

According to Siri:
– I don’t know. But I think there’s an app for that.

What is the purpose of demonstrations?

– I can’t answer that right now, but give me some time to write a very long play in which nothing happens.

(from www.sirifunny.com)

Recent advances in computer science and the neurosciences seem to indicate that a major breakthrough in enabling machines to think could come in the next decade. At least that is what the boffins tell us, but not everybody agrees. Some hold that thought is grounded in experience: as John Locke put it, “No man’s knowledge here can go beyond his experience.” Others place consciousness and thought within the realm of behaviourism, as something that can be learnt: in B. F. Skinner’s words, “The real question is not whether machines think but whether men do.”

In a recent radio programme on artificial intelligence broadcast by the Voice of Russia, Professor Barkovsky, Doctor of Physics and Mathematics and head of the Neuroinformatics Center at the Optical Neural Technologies Research Institute for System Analysis of the Russian Academy of Sciences, maintained that, although we do not know how the brain works, it is theoretically possible to create artificial intelligence in a machine. Mark Bishop, Professor of Cognitive Computing at Goldsmiths, University of London, said he was very doubtful that we will ever create a machine that can genuinely understand, let alone a computer-controlled robot that could ever be conscious.

Professor Bishop referred to four works: “I am looking at the work of the American philosopher Hubert Dreyfus, who published a book in 1972 called ‘What Computers Can’t Do.’ He identified two main problems: one is the ‘frame’ problem, that is, how to identify what is important about a problem without doing an exhaustive search; the other is implementing common sense and reasoning. His critique was based on the work of Martin Heidegger. In 1980 the philosopher John Searle published his ‘Chinese Room’ argument, whereby a person in a room full of boxes of Chinese characters can follow rules for shuffling the symbols yet understand nothing, because he has no way of connecting the characters to the real experience they are supposed to represent. In 1990 the Oxford mathematician Roger Penrose put new life into an argument made in the 1960s by the philosopher John Lucas, that there are certain aspects of mathematical understanding that cannot be reduced to following rules.” Professor Bishop himself published a paper in 2002 called ‘Dancing With Pixies,’ in which he argued that if executing a computer programme were enough to instantiate consciousness, then consciousness would have to be found everywhere, which he takes as a reductio ad absurdum of the claim. Professor Barkovsky accused Professor Bishop of being a mathematician and therefore unable to perceive the bigger picture.

Professor Barkovsky has a team about to undertake a programme of ‘brain reversal’ (something like reverse engineering the brain) under the ‘2045’ project, which aims to build a holographic avatar of a human being that can continue to live after the person it is modelled on has died. He argues that the whole issue of whether machines have consciousness is beside the point; the key is teaching computers to understand humans. Unfortunately, as Professor Barkovsky himself admits, “no computers exist right now which are able to understand humans, but we hope, within ten years, to solve this problem.” Professor Bishop pointed out that in 1950 the British mathematician Alan Turing published a paper called ‘Computing Machinery and Intelligence.’ In that paper he outlined the ‘Turing Test’ for machine intelligence, under which a machine is deemed intelligent once general educated opinion can speak of machines thinking without expecting to be contradicted. Turing predicted that this would have happened by the year 2000. We are still waiting. Professor Barkovsky claimed that serious steps have already been taken, such as IBM’s ‘Watson’ artificial intelligence programme, but even here it is clear that, although Watson was able to beat Brad Rutter, the all-time biggest winner on the American quiz show Jeopardy!, nobody would say that Watson is capable of thought. Even so, it is a feat that Hubert Dreyfus would have found hard to believe possible in the 1970s.

As computing power increases, computer programmes such as Watson and Siri are able to produce answers to questions by searching through their databases, creating the impression that they are doing something intelligent, but still nobody can say that an iPhone can think. This kind of computing power has useful implications for robotics, something that governments the world over are very aware of. Defence and robotics are enjoying a love affair. Projects like the US ‘BigDog’ robot, which has to be seen (on YouTube) to be believed, are set to continue; Honda’s ASIMO android project is big in Japan; and robotics is in vogue. But what is missing, as Professor Bishop points out, is the artificial intelligence to control it all, “which everybody thought would be really easy in the 1950s and 1960s.” From this point of view, the ‘2045’ project is reminiscent of bad science fiction, predicated first and foremost on the creation of a computer programme that will bring forth consciousness. “It might seem, when you are in a forest, that climbing a tree is a good idea if you want to get to the moon. But that involves a totally different technology. It seems to me we need a new approach, such as that coming from the new cognitive science, in particular the ‘embodied, enactive’ approach coming forward from modern European philosophy.” In the meantime, we can wait for the next app.
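To see how thin that impression of intelligence can be, here is a minimal sketch in Python of a programme that answers questions purely by matching keywords against a small hand-written table of stock replies. It is not how Watson or Siri actually work; the questions, answers and scoring rule below are invented solely for illustration.

    # A toy "assistant": it answers only by keyword overlap against a small,
    # hand-written table of replies. Every question, answer and scoring rule
    # here is invented purely for illustration.

    FAQ = {
        "what is the meaning of life": "I don't know. But I think there's an app for that.",
        "can machines think": "The real question is not whether machines think but whether men do.",
        "who beat brad rutter at jeopardy": "IBM's Watson did, in 2011.",
    }

    def answer(question: str) -> str:
        """Return the stored reply whose keywords overlap the question the most."""
        words = set(question.lower().rstrip("?!.").split())
        best = max(FAQ, key=lambda key: len(words & set(key.split())))
        if not words & set(best.split()):
            # Nothing matched at all: admit defeat rather than guess.
            return "I can't answer that right now."
        return FAQ[best]

    if __name__ == "__main__":
        print(answer("What is the meaning of life?"))   # matches the first entry
        print(answer("Can a machine think?"))           # matches the second entry

Of course, nothing in those few lines understands anything. The programme is a miniature Chinese Room, shuffling symbols that mean nothing to it, which is precisely Professor Bishop’s point.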