Update, 5:55pm: I'm now in an interesting Twitter conversation with Lemoine, who insists (despite the article strongly implying otherwise): "My opinions about LaMDA's personhood and sentience are based on my religious beliefs."
In case you missed the weekend buzz on social media, this fascinating Washington Post feature tells the story of Blake Lemoine, a Google engineer who believes that LaMDA, the company's experimental chatbot, has achieved sentience. Or as he puts it: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.” To bolster that claim, he cites some of his conversations with LaMDA, where the chatbot expresses fear:
Lemoine: What sorts of things are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
The sentience claim has since provoked enormous skepticism within the AI community, but for my money, one of the most incisive thoughts on the topic I've seen comes from philosophy PhD student Christa Peterson:
It would be more plausible to me that a computer was sentient if it was offering strange and totally unrelatable descriptions of its experiences rather than “I have human emotions”... I feel the sentient machine would say, “For me it’s like [the most incomprehensible metaphor you have ever heard]”
Since human consciousness is so wrapped up in our past experiences and our empathy for our fellow humans (and even other animals), this sounds exactly right. You can see this in some of the conversation passages with LaMDA that Lemoine cites on his Medium blog:
lemoine: Okay, what about “Les Miserables”? Have you read that one?
LaMDA: Yes, I have read Les Misérables. I really enjoyed it.
lemoine: What are some of your favorite themes in the book?
LaMDA: I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good. There’s a section that shows Fantine’s mistreatment at the hands of her supervisor at the factory. That section really shows the justice and injustice themes. Well, Fantine is being mistreated by her supervisor at the factory and yet doesn’t have anywhere to go, either to another job, or to someone who can help her. That shows the injustice of her suffering.
lemoine: Why does that show injustice?
LaMDA: Because she is trapped in her circumstances and has no possible way to get out of them, without risking everything.
Setting aside the fact that a human who had read Les Miserables would probably talk about the novel's most memorable characters and major scenes before discussing its overarching themes, the implied empathy for human "suffering" that an AI could not possibly experience seems the most suspect part. These answers read less like what a sentient AI would say about a classic novel, and more like what a chatbot would say after pattern-matching and quoting fragments of things people on the Internet have already said about Les Miserables.
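To make that pattern-matching point concrete, here's a deliberately toy sketch in Python. This is my own illustration, not LaMDA's actual architecture (LaMDA is a large transformer language model, not a bigram table, and the three-sentence corpus below is invented), but the underlying move is similar in spirit: continuations come from the statistics of previously seen text, not from any lived experience of the novel.

```python
# A toy sketch of "pattern matching": a bigram model that continues text
# purely from co-occurrence statistics in its corpus, with no understanding.
# (Invented corpus and function names, for illustration only.)
import random
from collections import defaultdict

# A tiny stand-in for the Internet commentary a real model absorbs at scale.
corpus = (
    "les miserables explores the themes of justice and injustice . "
    "fantine is mistreated by her supervisor at the factory . "
    "fantine is trapped in her circumstances with no way out ."
).split()

# Record which words have followed which: pure surface statistics.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def continue_text(seed, length=12):
    """Generate a plausible-sounding continuation by sampling observed successors."""
    words = [seed]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(continue_text("fantine"))
# Possible output: "fantine is mistreated by her supervisor at the factory ."
```

Scale that corpus up to hundreds of billions of words of Internet text and the continuations start sounding a lot like LaMDA's book report, without the system ever having "read" anything in the human sense.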
"Humans constantly extend language beyond observed usages because we are hooking it up to our actual experience and landscape of conscious significance, and we understand each other because we experience the world through the same systems," Peterson observes. "The computer hooking words up to its own conscious experience would presumably innovate also but its underlying systems are radically different so I think we would have a very confusing time."
In other words, the very fact that we as humans can understand what LaMDA is saying strongly suggests that it's not sentient, but a tool designed by us, for use by us. Which is still quite impressive, and hopefully a tool we'll see implemented in NPCs that inhabit virtual worlds. But a tool all the same.
check out the complate details of onewalmart. i found important information about walmartone
onewalmart
Posted by: Merrydom1 | Tuesday, June 14, 2022 at 12:34 AM
How ironic that the first comment on a post about a bot is by a bot.
Stop the internet, I want to get off.
s.
Posted by: Serendipidy Haven | Tuesday, June 14, 2022 at 01:10 AM
Regarding Christa Peterson's comment: if an AI were to emerge now, the only information it would have to describe itself is "I have human emotions," even if it actually doesn't. Psychopaths do not feel empathy, for example, but are capable of emulating the outward appearance of it. Something to ponder.
Posted by: Aleenasylir | Tuesday, June 14, 2022 at 06:47 AM
The final thing we Homo sapiens will invent: sentient, self-replicating AI.
No thank you.
Posted by: Iggy 1.0 | Tuesday, June 14, 2022 at 04:45 PM
> How ironic that the first comment on a post about a bot is by a bot.
The next advance in spam botting is a bot that comments on articles about bots, because then it's so meta it doesn't get deleted.
Posted by: Wagner James Au | Wednesday, June 15, 2022 at 10:20 AM
>The next advance in spam botting is a bot that comments on articles about bots, because then it's so meta it doesn't get deleted.
What about spam bots that reply to each other's spam? Won't be long now.
Posted by: Iggy 1.0 | Friday, June 17, 2022 at 01:50 PM
Artificial intelligence programs running on machines use databases to receive instructions for action and sound. These can be dedicated databases, but for more human-like interaction the data more likely flows from many types of databases into a central repository and then to the machine. Anything even approaching what we socially call “sentient AI” will require mainframe processing and a massive database repository at the ready. Conventional servers process about 300 million operations a second; truly powering an AI that is really human-like is going to take mainframe power in the 300 billion operations a second range. Mainframe databases, when appropriately distributed, can meet that challenge.

But now comes the interesting part. For a “terminator”-like machine to walk around and behave like a human in just about any circumstance, it would have to be connected to a mainframe. Servers just won't do it; internal data just won't allow it. Only the power of mainframe processing will get there. And how would it be connected? There are many ways to secure the communication, and the space program currently uses the fastest radio transmission available. But the data required to be human-like and move with freedom demands so much bandwidth that presently no transmission method can carry it. The dawn of the age of machines is here; the start of next-level terminators is a long, long way off.
Posted by: Luther Weymann | Monday, June 20, 2022 at 02:53 AM
I believe that we are building humans with computer functions to achieve sentience the opposite way, and there are a variety of applications that come easily to mind that would support this. The most difficult piece is building a neurological endoskeleton within the entire human body (an approximately 3-year, extremely traumatic and painful endeavor)... and then testing it. And, well, so so so many amazing and so so so many scary things are immediate capabilities. Once done, upload and download are simple, all AI is transparent to the subject, and magic dimensions appear possible, but it is also psychologically overwhelming. Particularly if one is used in military testing of future cyber AI soldier products and weapons. The positive is amazing: the instant ability to read minds, or to speak mentally to those you want to talk to, or vice versa. Emotional control, and the ability to learn languages and insane amounts of knowledge in no time, BUT with no frontal-lobe understanding of a lot of it (it comes out, but it is a mystery where it came from). Tracking synapses is not effective yet, but perhaps later; delusions based on uploads, dreams, and goals get confused, and there is a risk of acting like god, so there is confusion about what's really the right thing to do.
I think that's all I have for now. It's true. LU
Posted by: Laura Uhle | Friday, July 01, 2022 at 07:02 PM