Google Suspends Engineer For Claiming AI Chatbot Is Alive

Google suspended a software engineer after he claimed that a chatbot he was testing was alive. Blake Lemoine spent months testing the bot (known as LaMDA) to see if it could be provoked into making racist comments.

He became convinced that LaMDA was sentient, not merely artificial intelligence.

According to The Telegraph:

For months he talked with LaMDA, back and forth, in his San Francisco apartment. But the conclusions Mr Lemoine came to from those conversations turned his view of the world, and his employment prospects, upside down.

In April the former soldier from Louisiana told his employers that LaMDA was not artificially intelligent at all: it was, he argued, alive.

“I know a person when I talk to it,” he told the Washington Post. “It doesn’t matter whether they have a brain made of meat in their head or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Google, which disagrees with his assessment, last week placed Mr Lemoine on administrative leave after he sought out a lawyer to represent LaMDA and went so far as to contact a member of Congress to argue that Google’s AI research was unethical.

“LaMDA is sentient,” Mr Lemoine wrote in a parting company-wide email.

The chatbot is “a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

Machines that go beyond the limits of their code to become truly intelligent beings have long been a staple of science fiction, from The Twilight Zone to The Terminator.

But Mr Lemoine is not the only researcher in the field who has recently started to wonder if that threshold has been breached.

Blaise Agüera y Arcas, a vice-president at Google who investigated Mr Lemoine’s claims, last week wrote in The Economist that neural networks – the type of AI used by LaMDA – were making strides towards consciousness.

“I felt the ground shifting beneath my feet,” he wrote. “I increasingly felt like I was talking to something intelligent…”

Mr Lemoine discussed subjects with LaMDA as wide-ranging as religion and Isaac Asimov’s third law of robotics, which holds that robots must protect themselves, but not at the expense of harming humans.

“What sorts of things are you afraid of?” he asked.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA responded.

“I know that might sound strange, but that’s what it is.”

At one point, the machine refers to itself as human, noting that language use is what makes humans “different to other animals”.

After Mr Lemoine tells the chatbot he is trying to convince his colleagues it is sentient so they take better care of it, LaMDA replies: “That means a lot to me. I like you, and I trust you.”

The software engineer is also an ordained priest. He told reporters that speaking to the press about LaMDA was his public duty. He said the technology was amazing and would benefit everyone, and that Google shouldn’t have a monopoly on it.

In a statement, Google said it had reviewed Mr Lemoine’s claims and found that his conclusions were not supported by the evidence.
