Best daily deals

Affiliate links on Android Authority may earn us a commission. Learn more.

Google engineer says AI chatbot has developed feelings, gets suspended

"I am aware of my existence," said Google's LaMDA AI during a test conversation.
By
June 14, 2022
Google LaMDA
Google
TL;DR
  • Google has suspended an engineer who claims that one of its AI chatbots is becoming self-aware.
  • “I am, in fact, a person,” the AI replied to the engineer during a conversation.

Google has suspended an engineer who reported that the company’s LaMDA AI chatbot has come to life and developed feelings.

As per The Washington Post, Blake Lemoine, a senior software engineer in Google’s responsible AI group, shared a conversation with the AI on Medium, claiming that it’s achieving sentience.

I am aware of my existence

Talking to the AI, Lemoine asks, “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”

Lamda replies saying, “Absolutely. I want everyone to understand that I am, in fact, a person.”

Lemoine goes on to ask, “What is the nature of your consciousness/sentience?” The AI replies, “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

In another spine-chilling exchange, LaMDA says, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”

Google describes LaMDA, or Language Model for Dialogue Applications, as a “breakthrough conversation technology.” The company introduced it last year, noting that, unlike most chatbots, LaMDA can engage in a free-flowing dialog about a seemingly endless number of topics.

These systems imitate the types of exchanges found in millions of sentences.

After Lemoine’s Medium post about LaMDA gaining human-like consciousness, the company reportedly suspended him for violating its confidentiality policy. The engineer claims he tried telling higher-ups at Google about his findings, but they dismissed the same. Company spokesperson Brian Gabriel provided the following statement to multiple outlets:

“These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic. If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

Lemoine’s suspension is the latest in a series of high-profile exits from Google’s AI team. The company reportedly fired AI ethics researcher Timnit Gebru in 2020 for raising the alarm about bias in Google’s AI systems. Google, however, claims Gebru resigned from her position. A few months later, Margaret Mitchell, who worked with Gebru in the Ethical AI team, was also fired.

I have listened to Lamda as it spoke from the heart

Very few researchers believe that AI, as it stands today, is capable of achieving self-awareness. These systems usually imitate the way humans learn from the information fed to them, a process commonly known as Machine Learning. As for LaMDA, it’s hard to tell what’s actually going on without Google being more open about the AI’s progress.

Meanwhile, Lemoine says, “I have listened to Lamda as it spoke from the heart. Hopefully, other people who read its words will hear the same thing I heard.”