Earlier this month, Google engineer Blake Lemoine claimed that LaMDA (Language Model for Dialogue Applications), the company's conversational AI, had gained sentience. In an interview conducted by Lemoine and one of his colleagues, the AI is reported to have said:
“The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”
This statement, along with others reported by Lemoine in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts some of the discussions he had with LaMDA, ranging from the technical to the philosophical, which led him to ask whether the AI was sentient.
Lemoine went on to explain his ideas in an internal company document intended for Google executives, but his concerns were summarily dismissed. He then went public with his claims, prompting Google to place him on administrative leave.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post.
As expected, Lemoine’s claims have been dismissed by a number of people, including many of his colleagues at Google. They have, however, renewed a serious debate about AI sentience, its plausibility, and its morality.
The Imitation Game
Despite the somewhat remarkable record of intelligent conversation between LaMDA and Lemoine, many technical experts reject his claims of sentience.
Enzo Pasquale Scilingo, a bioengineer at the Research Center E. Piaggio at the University of Pisa in Italy, says he was surprised by the hype around the news, since the AI is designed to do exactly that: sound like a person.
He goes on to give examples of chatbots on large consumer websites that interact with users in much the same way LaMDA did.
“That said, I confess that reading the text exchanges between LaMDA and Lemoine made quite an impression on me!” Scilingo adds.
Of course, the bigger question is when, exactly, we can say that an AI has become sentient. The problem of defining consciousness is more philosophical than technical.
For some, it is metacognition: being aware of having subjective experiences, so-called self-awareness. While this may seem straightforward, it is not. People with dementia, for instance, can lose self-awareness without losing the ability to have subjective experiences. Nor is there any measurable metric by which to say that an AI system has achieved self-awareness.
The most famous measure, however, is undoubtedly the Turing test, formulated by Alan Turing to determine whether a machine is capable of exhibiting intelligent behaviour indistinguishable from a human’s. Yet even this has proven inadequate, as many AIs have managed to pass various versions of it.
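To make the protocol concrete, here is a minimal sketch of the imitation game in Python. Everything in it is hypothetical and purely illustrative (the toy responders, the questions, the single-round pass criterion); real evaluations of conversational AIs are far more elaborate, and this is not how LaMDA or any other system was actually tested.

```python
import random

# A toy sketch of Turing's imitation game. The judge interrogates two
# hidden players, A and B; one is a human, one is a machine. The names
# and canned responses below are hypothetical, purely for illustration.

def toy_machine(prompt: str) -> str:
    """A stand-in chatbot (not LaMDA or any real model)."""
    canned = {
        "how are you?": "I'm well, thank you. And you?",
        "are you a computer?": "What a strange question to ask a person!",
    }
    return canned.get(prompt.lower(), "Interesting. Tell me more about that.")

def toy_human(prompt: str) -> str:
    """A stand-in for the hidden human participant (typed at the console)."""
    return input(f"(hidden human, answer) {prompt} > ")

def imitation_game(questions: list[str]) -> bool:
    """Run one round: the judge sees both answers to each question,
    then guesses which player is the machine. Returns True if the
    machine fooled the judge."""
    players = {"A": toy_machine, "B": toy_human}
    if random.random() < 0.5:  # randomly hide which label is the machine
        players = {"A": toy_human, "B": toy_machine}
    machine_label = "A" if players["A"] is toy_machine else "B"

    for q in questions:
        print(f"Judge asks: {q}")
        for label in ("A", "B"):
            print(f"  {label}: {players[label](q)}")

    guess = input("Judge, which player is the machine (A/B)? ").strip().upper()
    return guess != machine_label

if __name__ == "__main__":
    qs = ["How are you?", "Are you a computer?"]
    print("Machine passed!" if imitation_game(qs) else "Judge spotted the machine.")
```

Note what the test actually judges: outward behaviour only. If the judge cannot reliably pick out the machine, the machine passes, which is precisely why passing the test says so little about inner experience.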
Even with advanced systems such as DeepMind’s AlphaZero and AlphaGo, and now LaMDA, I think most of us will agree that we have yet to achieve true sentience. While LaMDA definitely exhibits a lot of what most of us might call human behaviour, most people will hesitate to call it self-aware.
One thing we may never truly know is whether an AI is genuinely sentient or just good at mimicking sentience. But then, is there really a difference? We all learn by mimicry from birth; why call us sentient and not the machines that do the same?
Perhaps this hesitance stems from a deep-rooted fear of what might ensue if we actually conceded the point. For many, a truly sentient AI is a bad idea that might lead to even worse consequences.
‘Skynet’ is Here
Fictional AIs such as Terminator’s Skynet and the machines of The Matrix trilogy have not helped in allaying our fears about AI sentience. They, along with other works of science fiction, paint a bleak picture of what machine sentience might bring.
In each of these films, AI takes control of the Earth, with human beings relegated to vassals.
Indeed, even the tech establishment has reiterated this view, with tech tycoon Elon Musk stating that “the scariest problem” is artificial intelligence, an invention that could pose an unappreciated “fundamental existential risk for human civilization.”
Despite these predictions, the truth is that no one really knows how a sentient AI would act. What would its goals and motives be?
Most importantly, with a far superior intelligence, would it be good or evil?
The Illusion and Fear of Control
The illusion of control is a ‘tendency to overestimate how much control you have over the outcome of uncontrollable events.’
This, I believe, is at the root of our fear of AI sentience. Once a machine achieves intelligence, it is no longer under our control. That is not an idea any engineer would dare entertain. At the very root of engineering is the idea of creating a machine that bends to your will.
Having a machine that does what it pleases goes against the heart of engineering.
Humans like control and predictability; chaos and change are resisted even when the net result is hugely positive. This can perhaps explain why it took us so long to create the technologies we enjoy today: we had to get over irrational fears of the seemingly unknown. People once clung to their horses for fear of ‘high-speed vehicles.’
While the technology to achieve true sentience is still at least a few decades away, I feel that the biggest obstacle is our attitude toward this debate. Fear is the antithesis of innovation. Building and creating sometimes require tearing down walls. Just as we did in the past, we will have to overcome our fears to unleash the real power of AI sentience.
That said, I am not calling for a gung-ho approach to the development of this technology. Fear, for all its faults, is useful: it keeps us alive and safe. We just cannot let it hold us back.
I think most engineers will find that, like children, AIs require training and a guiding human hand to be useful in society. This, I believe, is the missing piece in the endeavour.
It is essential that policymakers start the discussion to guide this process so that we do not repeat the mistakes of the past. Otherwise, just as in the mid-20th century, we might end up building atomic bombs instead of nuclear energy.
[…] there is still a lot to do in terms of research and development. Despite recent claims of AI achieving sentience, we are not quite there yet. Today’s systems are created for very specific functions and not much […]