What Would A Sentient, Conscious Robot Mean For Humans?

A Google engineer claimed one of the company's AI systems had become sentient. Now there is debate over whether sentient machines have a place in the world.
Posted at 9:17 AM, Jun 22, 2022, and last updated at 10:17 AM EDT, Jun 22, 2022

The long-standing sci-fi trope of robots gaining consciousness has sparked endless debates on whether machines can actually be sentient — that is, can they feel or perceive things?

That question recently found new life after Google placed an engineer on leave for claiming to have encountered sentience in the company's artificial intelligence chatbot generator.

According to Google, LaMDA, or Language Model for Dialogue Applications, is technology that can engage in "free-flowing" conversations. It is fed trillions of words from books and articles on the internet, and it looks for patterns in how those words are used together to model how language works and mimic human speech.
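
The broad idea behind this kind of "next word" pattern-learning can be illustrated at a much smaller scale. Below is a minimal Python sketch of a toy bigram model, offered as an illustrative assumption only, not as LaMDA's design: it counts which words tend to follow which in sample text, then mimics that text by sampling likely continuations. LaMDA's actual neural-network architecture is vastly more sophisticated.

    import random
    from collections import defaultdict, Counter

    # A tiny stand-in corpus; the real system is fed trillions of words.
    corpus = (
        "robots can feel pain . robots can perceive things . "
        "humans can feel pleasure . humans can perceive pain ."
    ).split()

    # Count which words tend to follow each word (a "bigram" model).
    follows = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word][next_word] += 1

    def generate(start, length=6):
        """Mimic the corpus by repeatedly sampling a likely next word."""
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("robots"))  # e.g. "robots can feel pleasure . humans can"

Everything such a model can say originated in the human-written text it was trained on, which is the point Pless makes below.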

The Google engineer who spoke to LaMDA as part of his job published a transcript of a wide-ranging "interview" conducted with the bot, including exchanges about the emotions it claimed to feel and understand, and about its fears.

There are different ways to define sentience, but generally, it includes the ability to react the way living things do, ranging from positive states like pleasure to negative states like pain. Newsy spoke to experts who said LaMDA was not sentient, even if it could give sentient-sounding responses to complex questions.

"The sentience of a Google chat bot comes from it collecting data from decades worth of human texts — sentient human text," said Robert Pless, computer science department chair at George Washington University. "And so the sentience that you see is not of the chat bot, but it's the sentience of all of us who put the data in that it's kind of giving to you."

"What we see in the conversations that have been published with LaMDA is the result not just of that training but also of the choices of prompts that the human interacting with the machine has put together," said Emily M. Bender, professor of linguistics at University of Washington. "If you start asking something like, 'Are you sentient?' you are going to cause it to access parts of its training data that will give you answers about that."

It's also unclear whether a sentient AI's mind would mirror a human's. Hypothetically, if an AI were sentient, it might think and perceive differently than a human does.

"There are all kinds of different minds in the world," said Jeff Sebo, of New York University. "Octopuses think and feel really differently from us, but they still matter, because they can think and feel, because they can experience their version of pleasure and pain and happiness and suffering. Similarly, even if AIs think and feel in a very different way from us, it can still matter."

Even if Google's creation isn't sentient, Bender says it's important not to give people the misconception that these systems are truly intelligent, when they possess nothing more than the ability to spit back information drawn from the data they were trained on. That could feed into automation bias: the assumption that automated systems, like GPS navigation, are more correct than humans.

"The more that we think machines are intelligent and people calling them sentient, I think sort of feeds into that hype about them possibly being intelligent," Bender said. "The more people are going to accept what machines say, and if those people are in positions of power over other people, then that starts causing a lot of problems."

