"CAN A MACHINE BE CONSCIOUS" BANBURY MEETING
ABSTRACT FOR IGOR ALEKSANDER (SESSION 3, TALK 3)
Conscious Machines?
Igor Aleksander
Despite sounding like the mother of all oxymorons, the concept of a conscious machine is gaining credibility. Certainly no machine built to date could be described as conscious. But a major change is occurring as neurologists and engineers develop ever more accurate models of the specific chemical and electrical activity in the brain that a living individual reports as conscious sensation. As this understanding is mechanistic, it raises the serious possibility that implementing such models would lead to machines driven by the same mechanisms. Such a machine may be said to be conscious in a non-biological way.
In the first instance, to be even minimally conscious, any organism must be aware of its presence in an external world with which it can interact with respect to its needs. This favours the creation (by evolution or design) of successful mechanisms that support this interaction. Few would deny that in natural organisms this sensation is uniquely tied to the electrochemical activity of certain groups of neurons in the brain. It is unique in the sense that two distinct sensations cannot be due to the same brain activity without invoking ghostly intermediaries. Secondly, what we imagine and recall is increasingly explained by resonances between neural layers which partly reinstate activity that was originally evoked during perception. So imagination 'feels' like a recall of perception even if that exact perception never took place. What makes consciousness so puzzling is that the neural activity responsible for both perception and imagination provides a sensation of being an observer (the self) in an 'out-there' world. This sensation depends on the neural activity that causes the organism to explore its world, in addition to that which conveys sensory information. Take the ocular-motor system. This has evolved to move the eyes rapidly towards tiny changes in the field of view, to follow moving objects, to converge if something comes nearer, and even to saccade towards a perceived sound. It also interacts with memory to check hypotheses about partially seen objects and to predict. So the neural activity that supports conscious sensation is not only due to sensory signals such as light intensities on the retina or vibratory stimulation of the cochlea, but clearly depends on signals from the muscles that move the eyes and the head, and on signals that come from touching the things that are seen.
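The 'resonance' account of imagination sketched above can be caricatured in a few lines of code. The following is a standard Hopfield-style attractor network, not the author's own model: an activity pattern is stored during 'perception', and a later partial cue ('imagination') settles the network back into that pattern, so recall partly reinstates the original perceptual activity.

```python
import numpy as np

# Toy Hopfield-style attractor network: an illustrative sketch of how
# recall can partly reinstate activity originally evoked by perception.

rng = np.random.default_rng(0)

N = 64                                 # number of binary (+1/-1) "neurons"
percept = rng.choice([-1, 1], size=N)  # activity laid down during "perception"

# Hebbian storage: strengthen connections between co-active neurons.
W = np.outer(percept, percept).astype(float)
np.fill_diagonal(W, 0.0)               # no self-connections

def recall(cue, steps=10):
    """Iteratively settle the network from a partial cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                  # break ties deterministically
    return s

# Degrade the cue: half the neurons start in random states.
cue = percept.copy()
flipped = rng.choice(N, size=N // 2, replace=False)
cue[flipped] = rng.choice([-1, 1], size=N // 2)

recalled = recall(cue)
overlap = float(recalled @ percept) / N  # 1.0 = perfect reinstatement
print(overlap)
```

Starting from a half-degraded cue, the network recovers the stored pattern exactly, which is the sense in which imagined activity 'feels' like the original percept.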
There is much evidence in neurology (for example in the sustained work of Galletti, which began in 1989) that cells in a variety of cortical areas process sensory information only as indexed by muscular action, positioning events in inner representations as they occur 'out there'. Some of this supporting unconscious neural activity is what Crick and Koch call 'The Zombie Within'. The key step in arguing that a machine might be conscious is to accept that when human beings describe their sense of consciousness (including its strong qualitative content, sometimes called 'qualia') they are describing precisely that neural activity which has 'out-there' properties, and that there is no real barrier to machines doing the same.
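Gaze-dependent cells of the kind Galletti reported can be caricatured with a 'gain field': a unit's retinal tuning is multiplied by a gain that depends on eye position, so its firing indexes where a stimulus lies 'out there' in head-centred space, not merely where it falls on the retina. This is a hypothetical sketch; the functional forms and parameter values are illustrative assumptions, not Galletti's data.

```python
import numpy as np

# Hypothetical gain-field sketch: Gaussian retinal tuning multiplied by
# a linear gaze-dependent gain. All parameter values are assumptions.

def unit_response(retinal_pos, gaze_angle, pref_retinal=30.0, gain_slope=1.0):
    """Response = retinal tuning (Gaussian) x gaze-dependent gain (linear)."""
    tuning = np.exp(-0.5 * ((retinal_pos - pref_retinal) / 5.0) ** 2)
    gain = max(0.0, 1.0 + gain_slope * gaze_angle / 40.0)
    return tuning * gain

# Head-centred position = retinal position + gaze angle (degrees).
# The same 'out there' point viewed under two gaze directions lands on
# different retinal positions and evokes different responses, so the
# unit's firing carries eye-position information as well as retinal input.
head_centred = 20.0
responses = {}
for gaze in (-10.0, 10.0):
    retinal = head_centred - gaze
    responses[gaze] = unit_response(retinal, gaze)
    print(f"gaze {gaze:+.0f} deg -> retinal {retinal:+.0f} deg, "
          f"response {responses[gaze]:.4f}")
```

A downstream population reading out many such units could recover the head-centred location, which is one way sensory activity becomes 'indexed by muscular action'.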
Some early brain-inspired computer simulations of both electrical (informational) and chemical (control) activity are available in our own laboratories, and these exhibit embryonic 'out-there' properties which we have called 'depictions'. Such thinking is reflected in philosophical work too: 'out-thereness' is what Max Velmans calls the 'reflexive' nature of consciousness. Also, work by Bressloff, Cowan and others at the University of Chicago shows how appropriate computational models of the human visual system lead to accurate predictions of the hallucinatory sensations reported by participants who have taken drugs. This puts paid to the criticism of modelling work that sensation cannot be mechanistically explained except through biology. Another criticism is that even if 'out-there' mechanisms were transferred into a robot, the result would only be a well-behaved unconscious zombie. This is said on the basis that the ingredient X which turns the zombie into a conscious organism is missing. The conscious machine concept calls for a fair argument: the machine constructor will attempt to demonstrate that X is not necessary, while the detractor will have to prove that it is, which has not yet been done.
Of course, any robot constructed on the basis of what has been said so far might just about perceive and imagine itself in a visual world. More speculatively, emotions, desires, ambitions, joys and depressions, having a neural basis too, become candidates for transfer into engineered artefacts. As far as the conscious robot goes, it is not our emotions, desires and so on that would be transferred to it; rather, it would have a non-biological neural structure which develops emotions appropriate only to its own existence. What it would share with living beings are the evolutionary, emergent, depictive and interactive mechanisms which make us conscious.
Igor Aleksander is in the Electrical and Electronic Engineering Department of the Imperial College of Science, Technology and Medicine, London SW7 2AZ, UK.
FURTHER READING
Aleksander, I. How to Build a Mind (Phoenix Press, London, 2001)
Aleksander, I. and Dunmall, B. An extension to the hypothesis of the asynchrony of visual consciousness. Proc. R. Soc. Lond. B 267, 197-200 (2000)
Bressloff, P. C., Cowan, J. D. et al. Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Phil. Trans. R. Soc. Lond. B 356, 299-330 (2001)
Koch, C. and Crick, F. The Zombie Within. Nature 411, 893 (2001)
Galletti, C. and Battaglini, P. P. Gaze-dependent visual neurons in area V3A of monkey prestriate cortex. J. Neurosci. 9, 1112-1125 (1989)
Velmans, M. Understanding Consciousness (Routledge, London, 2000)