
Update, 5:55pm: I'm now in an interesting conversation with Lemoine on Twitter, who, despite the article strongly implying otherwise, insists: "My opinions about LaMDA's personhood and sentience are based on my religious beliefs."
In case you missed the weekend buzz on social media, this fascinating Washington Post feature tells the story of Blake Lemoine, a Google engineer who believes that LaMDA, the company's experimental chatbot, has achieved sentience. Or as he puts it: "If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.” To bolster that claim, he cites some of his conversations with LaMDA, where the chatbot expresses fear:
Lemoine: What sorts of things are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
The sentience claim has since provoked enormous skepticism within the AI community, but for my money, one of the most incisive thoughts on the topic I've seen is from philosophy PhD student Christa Peterson:
It would be more plausible to me that a computer was sentient if it was offering strange and totally unrelatable descriptions of its experiences rather than “I have human emotions”... I feel the sentient machine would say, “For me it’s like [the most incomprehensible metaphor you have ever heard]”
Since human consciousness is so wrapped up in our past experiences and our empathy for fellow humans (and even other animals), this sounds exactly right: a machine without that shared history would have little reason to describe its inner life in such familiar human terms. You can see this in some of the conversation passages with LaMDA that Lemoine cites in his Medium post:
Is It Fair to Analyze Blockchain-Based Metaverse Platforms Based on Their User Numbers, Not Potential?
A valid question from reader "Mitch", responding to my post last week pointing out the tiny weekly active user numbers of blockchain-based metaverse platforms:
I do work with some colleagues in the web3 space, and they make a pretty good case that the technology is worth experimenting with. But it is still very unclear how many of the promises being made by web3 advocates can or will be fulfilled.
And as I can say from painful personal experience, this is a key reason why it's so important to follow not only the potential or possibility of a technology, but also its actual consumer adoption: