LaMDA IQ debate swirls over who (or what) is tone deaf: update

This article is updated with comments* from Dr. Kate Darling on June 15.

The debate over whether Google's LaMDA conversational AI is sentient has provoked heated online vitriol and, apparently, the paid suspension of the Google engineer who helped spark the controversy.

Google has said its systems imitate conversational exchanges but don’t have consciousness. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Google said in a statement.

The suspended software engineer, Blake Lemoine, told The New York Times he had been put on leave after the company's human resources department found he had violated Google's confidentiality policy.

Lemoine had reportedly argued for months that LaMDA, or Language Model for Dialogue Applications, had consciousness and even a soul, although many at Google disagreed.

That view caught fire recently when The Economist ran a guest article by Blaise Aguera y Arcas, a Google VP, in which he said of LaMDA, “I felt the ground shift under my feet…increasingly felt like I was talking to something intelligent.”

The Economist article, however, was careful to hedge somewhat on the central question, using the headline, “Artificial neural networks are making strides towards consciousness…” In that headline, “making strides” strikes a different tone than claiming actual consciousness has been reached.

Much of the reaction to Aguera y Arcas’ view has been harsh, however. “We in the AI community have our differences, but pretty much all of [us] find the notion that LaMDA might be sentient completely ridiculous,” wrote Gary Marcus in “The Road to AI We Can Trust.”

Marcus quoted Stanford economist Erik Brynjolfsson’s tweet: “To claim [foundation models] are sentient is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside.”

The widespread perception among AI experts is that sentient AI is far, far off in the future. Robot ethicists such as Dr. Kate Darling have questioned whether a robot could be trusted, for example, to care for an elderly person, at least on a consistent, long-term basis. There’s no question that robots perform many jobs on factory assembly lines today, including welding metal parts together in repetitive patterns.

But Darling noted in a recent podcast that, so far, robots have not taken control of an entire assembly line. An article she penned recently was entitled, “It’s time to accept AI will never think like a human—and that’s okay.”

RELATED: Dr. Kate Darling to keynote on robotics at Sensors Converge

Of the LaMDA controversy, Darling tweeted, with a touch of irony given the subject of true intelligence, that she had not read The Economist article because she was on a beach trip with her family.

*She did follow up on that beach-trip tweet with another tweet on June 15, apparently referring to LaMDA, saying, "The AI is not sentient." It added: "Of course people think it's sentient." She also said she offers a fuller explanation in her book, "The New Breed."

Editor’s Note: Dr. Kate Darling will keynote at Sensors Converge on June 28 on the topic, “The Future of Human-Robot Interaction.”