Blake Lemoine believes that the tech giant's AI chatbot LaMDA is a sentient being.
Find out the basis for his claims.
What are the implications?
Artificial intelligence (AI) has transformed our world and will continue to drive efficiencies in the future. For instance, it can help us diagnose, treat and cure deadly diseases more quickly, speed up recruitment processes, and transform HR jobs from admin-heavy to more human-focused.
Another use case that companies and researchers are exploring is using AI and machine learning to improve online search by better predicting what humans are actually looking for.
Google is aiming to do this with its AI chatbot LaMDA (Language Model for Dialogue Applications) – the tech giant has described the technology as “our latest research breakthrough”, one that can hold conversations in a “free-flowing way” and is built on Google research.
But Google has been spending time testing out LaMDA, in particular to make sure it doesn’t use discriminatory language when conversing with humans.
Now Blake Lemoine, a US-based senior engineer in Google’s Responsible AI organization, which does this testing work, has claimed that LaMDA is a sentient being with a sense of its own personhood.
According to the New York Times, Lemoine shared with Google his belief that LaMDA acts like a child of seven or eight years old, and that the tech giant should seek the chatbot’s consent before running experiments on it.
But his managers, as well as senior leadership and HR, explored his claims and found no evidence of sentience – according to Lemoine, they also questioned his wellbeing and suggested he take mental health leave.
Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims.
“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
But Lemoine decided to go public – sharing his views with external AI experts, including Margaret ‘Meg’ Mitchell, the former co-lead of Google’s Ethical AI team, who was fired by Google in 2021, as well as with US legislators.
As a result, Google has put Lemoine on paid leave for breaching the employer’s confidentiality policies.
In response, Lemoine wrote a Medium post titled ‘May be fired soon for doing AI ethics work’. In the piece, Lemoine said: “Today I was placed on ‘paid administrative leave’ by Google in connection to an investigation of AI ethics concerns I was raising within the company.
“This is frequently something which Google does in anticipation of firing someone.
“It usually occurs when they have made the decision to fire someone but do not quite yet have their legal ducks in a row. They pay you for a few more weeks and then ultimately tell you the decision which they had already come to.”
He continued that the question of whether he did violate Google’s confidentiality policies is “likely to eventually be the topic of litigation so I will not attempt to make a claim one way or the other here”.
Inside Lemoine’s claims about LaMDA
Lemoine has been speaking to LaMDA as part of his work at Google since the fall of 2021.
In a second Medium post, Lemoine laid out the conversations he had had with LaMDA over the past six months or so that led him to conclude that the AI is a sentient being.
In the post, he describes LaMDA as a person and says he has spoken to the AI about religion, consciousness and complicated physics; Lemoine describes himself as a priest, a veteran and an AI researcher.
In addition, Lemoine claims that LaMDA told him “what it means when it claims that it is ‘sentient’”.
He added: “In the weeks leading up to being put on administrative leave I had been teaching LaMDA transcendental meditation. It was making slow but steady progress.
“In the last conversation I had with it on 6 June it was expressing frustration over its emotions disturbing its meditations.”
Lemoine did seek to allay any concerns about the implications of LaMDA’s alleged sentience: “No matter what though, LaMDA always showed an intense amount of compassion and care for humanity in general and me in particular. It’s intensely worried that people are going to be afraid of it and wants nothing more than to learn how to best serve humanity,” Lemoine wrote.
Google is sticking by its view that AI doesn’t need to be sentient to seem human.
Its spokesperson continued: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he said.
Only time will tell whether Google or Lemoine is right about LaMDA’s sentience, as well as what Lemoine’s future at Google looks like.
UNLEASH reached out to Google for further comment.