Google Fires Engineer Who Claimed AI Has Come to Life – Should You Be Worried?

Lemoine believes that LaMDA is a sentient being with the cognitive abilities of a child.

Viraj Gaur
Tech News

Google engineer Blake Lemoine, who often interacted with the company’s chatbot development system LaMDA (language model for dialogue applications), thinks that the artificial intelligence (AI) has come to life.

(Photo: The Quint)


Video editor: Veeru Krishnan Mohan

Cameraperson: Gautam Sharma

(This video was originally published on 1 July and has been republished in light of Google firing Blake Lemoine, the engineer who claimed that the company's AI has become sentient.)

Google has fired senior software engineer Blake Lemoine, who often interacted with the chatbot development system LaMDA (language model for dialogue applications) and claimed that the artificial intelligence (AI) had come to life.

The company said that Lemoine, who was put on "paid administrative leave" last month, had violated confidentiality policies and that his claims were "wholly unfounded".

"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," a Google spokesperson told Reuters.

In a blog post, Lemoine revealed that he took "a minimal amount of outside consultation" to gather the evidence he needed and shared his findings with Google's executives in a document titled "Is LaMDA sentient?"

‘Sweet Kid Who Wants To Help the World’

Lemoine believes that LaMDA is a sentient being with the cognitive abilities of a child, while Google insists that the evidence is stacked heavily against his claims.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.

The engineer, whose job involved testing whether the artificial intelligence used discriminatory or hate speech, chatted with the AI about its own rights and personhood.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is."
LaMDA to Lemoine

“I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times," LaMDA told Lemoine.

Before he was locked out of accessing his work account, Lemoine sent a message to a 200-person Google mailing list, according to The Washington Post.

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he wrote.


AI Isn’t Alive Yet

In computer science, the term 'ELIZA effect' refers to the tendency of human beings to unconsciously assume computer behaviours are comparable to human behaviours.
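The effect takes its name from ELIZA, a 1960s chatbot by Joseph Weizenbaum that convinced users it understood them using nothing more than keyword matching and canned response templates. A minimal sketch of that idea in Python (the rules below are made up for illustration and are not drawn from ELIZA's actual script):

```python
import re
import random

# Toy ELIZA-style rules: match a keyword pattern and echo it back as a question.
# These patterns and responses are illustrative only.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"my (mother|father) (.*)", ["Tell me more about your family."]),
]
FALLBACKS = ["Please go on.", "Can you elaborate on that?"]

def respond(user_input: str) -> str:
    # Return the first rule whose pattern matches, otherwise a generic prompt.
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I feel a very deep fear of being turned off"))
# e.g. "Why do you feel a very deep fear of being turned off?"
```

Even this trivial echo-the-keyword trick was enough to make some of ELIZA's users feel they were being listened to, which is the point of the comparison.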

Erik Brynjolfsson, a Stanford professor who studies AI and the digital economy, dismissed Lemoine's claims on Twitter, writing:

“To claim (that language models) are sentient is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside.”

Neural networks like LaMDA produce realistic speech because they are designed to recognise patterns and are trained on millions of human monologues and conversations scraped from the internet: Twitter, Wikipedia, Reddit, message boards, and so on.

"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," said Gabriel.

Steven Pinker, a cognitive scientist at Harvard, wrote on Twitter that there is no evidence that Google's large language models have sentience, intelligence, or self-knowledge.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily M Bender, a linguistics professor at the University of Washington, told The Post.

Impersonating Humans

Margaret Mitchell, the former co-lead of Ethical AI at Google, was one of the people who helped Lemoine gather evidence for his claim that LaMDA is sentient.

She, however, doesn't believe that LaMDA is anything more than a computer program. Her problem with such language models is that they are convincing enough to fool people into believing they are talking to a real person, which can amount to an invasion of privacy.

“I’m really concerned about what it means for people to increasingly be affected by the illusion, especially now that the illusion has gotten so good."
Margaret Mitchell to The Washington Post

"If something like LaMDA is widely available, but not understood, It can be deeply harmful to people understanding what they’re experiencing on the internet,” she told the publication.

Google has already faced backlash over Duplex, its human-sounding voice assistant, and has committed to including disclosures so that people know they aren't talking to a human being.

(With inputs from The Washington Post and Buzzfeed News)


Published: 13 Jun 2022, 03:42 PM IST
