Wednesday, 3 October 2012

Could a robot be 'human'?

Stephen Law, on his blogsite, has recently posted a fictional conversation between 'Kimberley', a human being, and 'Emit' -- an artificial-intelligence robot that she has just bought (see 'Kimberley & Emit'). It's intended to raise the question of whether an artificial intelligence actually 'understands' in any meaningful sense of the word. But I had a problem with the discussion as presented. What follows is an expanded version of my comment, offering my understanding of 'self', 'instincts' and other aspects of 'human nature':

I have a difficulty that means I can't get started with this: there's a major component missing from the story.

From the moment Kimberley's brain started to form in her mother's womb it was logging and attempting to process sensory data. From the moment of her birth (possibly before), that data included feedback from the world around her -- some of it in the form of reactions stimulated by her own actions (although her brain had to make that connection gradually). As she grew towards adulthood, her brain was constructing a mental model of the universe out of sensory data, feedback data and remembered analyses of that data. She (like all of us) is actually living in her mental model of the universe (although the universe itself continually challenges it).
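
To make that loop a little more concrete, here is a very rough sketch in Python -- purely illustrative, with names and numbers of my own invention rather than any claim about how a real brain works -- of an agent that keeps revising an internal model from a stream of sensations and from the feedback produced by its own actions:

```python
import random

class WorldModel:
    """A toy internal model: running expectations about what each stimulus leads to."""
    def __init__(self):
        self.expectations = {}                          # stimulus -> expected outcome (0..1)

    def predict(self, stimulus):
        return self.expectations.get(stimulus, 0.5)     # no expectation yet? assume indifference

    def update(self, stimulus, outcome, plasticity=0.1):
        # Nudge the expectation towards what actually happened (the feedback).
        predicted = self.predict(stimulus)
        self.expectations[stimulus] = predicted + plasticity * (outcome - predicted)

def live_a_little(model, steps=1000):
    """Crude perception-action loop: sense, act on the model, observe the reaction, revise."""
    for _ in range(steps):
        stimulus = random.choice(["light", "sound", "touch"])   # sensory data arrives
        act = model.predict(stimulus) > 0.5                     # behave according to the model
        # The world's reaction depends partly on the action taken -- feedback the agent caused.
        outcome = max(0.0, min(1.0, (0.7 if act else 0.3) + random.uniform(-0.2, 0.2)))
        model.update(stimulus, outcome)

model = WorldModel()
live_a_little(model)
print(model.expectations)
```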

Simultaneously, within that model, a sense of her 'self' was emerging as an actor in this process -- the mysterious component that analyses and reflects on the data as it tries to fit it into the provisional, already-constructed model. That sort of 'self'-awareness seems to be unique to the human species and may be connected to humans' capacity for syntactic language. The older she gets (depending on the accuracy, flexibility and openness to conflicting data of the model her brain has built), the harder it may become to fit new data or new analyses (which can also be learned, via language, from other human 'selves') into her model universe. Her brain may (even below the level of her 'self'-consciousness) end up bending new 'facts' to fit the model. The existence of Emit may be part of that!
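
That 'hardening' of the model with age can be sketched in the same toy terms -- again purely illustrative, with numbers chosen only to make the point -- as a plasticity that shrinks with experience, so that a late-arriving conflicting 'fact' barely shifts a settled expectation:

```python
class HardeningModel:
    """A toy model that becomes harder to revise the more it has already absorbed."""
    def __init__(self):
        self.expectation = 0.5      # a single belief, for simplicity
        self.experience = 0         # how much data has already shaped it

    def absorb(self, observation):
        # Plasticity falls off with experience: later data moves the model less and less.
        plasticity = 1.0 / (1.0 + self.experience)
        self.expectation += plasticity * (observation - self.expectation)
        self.experience += 1

model = HardeningModel()
for observation in [0.9] * 50:      # a lifetime of consistent experience
    model.absorb(observation)
model.absorb(0.1)                   # one strongly conflicting 'fact'
print(round(model.expectation, 3))  # barely budges: the conflicting data is largely discounted
```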

We are not told, however, how Emit has been pre-programmed. There is a suggestion that the whole process of modelling a universe from scratch has been bypassed and short-circuited by the programming-in of someone else's idea of the 'correct' responses to various stimuli. The responses are not being determined by an Emit 'self'. They seem roughly equivalent to what we would think of as 'instincts': pre-programmed responses to certain physical stimuli, which humans and all other animals also have, and which (in the case of animals) are the prime determinant of behaviour. So far as we know, animals largely lack the 'self'-awareness that enables instinctive responses to be over-ridden to some extent (although not the instincts themselves).
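
The contrast I have in mind -- between an 'instinct-only' being and one with a 'self' that can override, but not erase, its instincts -- might be sketched like this (again a toy of my own, not anything taken from Stephen Law's dialogue):

```python
# Pre-programmed stimulus -> response table: the 'instincts'.
INSTINCTS = {
    "loud_noise": "flinch",
    "sweet_taste": "approach",
    "sudden_heat": "withdraw",
}

class InstinctOnlyAgent:
    """Responses fully determined by the pre-programmed table (Emit, as described)."""
    def respond(self, stimulus):
        return INSTINCTS.get(stimulus, "do_nothing")

class ReflectiveAgent(InstinctOnlyAgent):
    """The same table, plus a 'self' that can veto a response -- the instinct itself remains."""
    def __init__(self):
        self.overrides = {}                          # learned, deliberate exceptions

    def decide_override(self, stimulus, instinctive_response):
        # Stand-in for reflection: here, a hand-coded choice to stay put despite the urge.
        return "hold_still" if stimulus == "loud_noise" else None

    def respond(self, stimulus):
        instinctive = super().respond(stimulus)      # the instinct still fires
        override = self.overrides.get(stimulus) or self.decide_override(stimulus, instinctive)
        return override or instinctive

print(InstinctOnlyAgent().respond("loud_noise"))     # flinch
print(ReflectiveAgent().respond("loud_noise"))       # hold_still -- over-ridden, not erased
```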

Assuming Emit has the physical capability within his brain structure to develop a sense of 'self', has he had time to evolve any sense of 'self' as an analytical agent? It sounds very unlikely, and it couldn't be implanted from the beginning. (If my analysis is correct, 'self' is at the outset only a potentiality: the self has to be discovered gradually through progressively sophisticated interaction with the world.) Newly constructed, he might be able to begin the process of 'self'-discovery, but it would be utterly confusing, given that his pre-programmed responses would deny that 'self' any agency (unless he'd been programmed with the capacity to reprogram his own 'operating system'). He wouldn't be able to refine his responses through trial and error. It would, I imagine, be akin to gradually waking up into a nightmare, perpetually at war with his 'instincts'. Maybe that is a picture of what a human baby is faced with, but maybe also the lack of a strong sense of self is what makes it bearable. If Emit, on the other hand, had been created with the capability of reprogramming his own 'instinctual' operating system, it's almost impossible to predict the consequences -- he has the potential to become anything or (more likely, I would have thought) to become completely dysfunctional: the equivalent of the 'blue screen of death'.
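
Even a deliberately crude sketch (again my own invention, not anything in the original dialogue) shows how unpredictable that last case is: an agent free to rewrite or delete its own response rules, with nothing to anchor the rewriting, can drift into having no coherent behaviour at all:

```python
import random

class SelfReprogrammingAgent:
    """Toy illustration of unconstrained self-modification of an 'instinctual' rule table."""
    def __init__(self):
        self.rules = {"loud_noise": "flinch", "sweet_taste": "approach", "sudden_heat": "withdraw"}

    def respond(self, stimulus):
        return self.rules.get(stimulus, "do_nothing")

    def reprogram(self):
        # Any rule may be rewritten to anything, or deleted -- there is no fixed reference point.
        for stimulus in list(self.rules):
            if random.random() < 0.5:
                self.rules[stimulus] = random.choice(["flinch", "approach", "ignore", None])
        self.rules = {s: r for s, r in self.rules.items() if r is not None}

agent = SelfReprogrammingAgent()
for _ in range(10):
    agent.reprogram()
print(agent.rules or "no coherent responses left -- the 'blue screen of death'")
```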

Hypothetically, though, the possibility must remain of an artificial brain with the same physical capability as the human brain: able not only to gather, log and analyse sensory and feedback data, but also to use syntactic language and (possibly relatedly) to develop a sense of its 'self' living in an objective universe. Assuming it did not share the same organic processes as the human body (it would probably be powered differently, might lack physical pain detection and other feedback mechanisms within its own body, might have different physical capabilities for interaction with the world, and so on), it would be a different sort of being.

I wonder, though . . . would it be able to fall in love?