Here are Meta's latest Metaverse technology announcements dealing with AI solutions. To no one's surprise, the company is fully open about its plans to leverage highly personal data about what people see and how they use their bodies while in its metaverse platform. (A coming crisis people like Avi Bar-Zeev have been screaming about for years.)
What's somewhat surprising (at least to me) is the company's voice-controlled Builder Bot prototype. (Watch above at around 3:10 minutes in.) It painfully, cringefully has all the markings of the sci-fi UI fallacy:
Ask Delphi is a very interesting artificial intelligence project being chattered about on Twitter recently. Very roughly summarized, the machine learning program compiles a wide variety of ethical questions and answers drawn from Reddit's infamous Am I the Asshole community, along with other online sources, and synthesizes them into a database that can be queried through simple, human-readable asks. Here are some relatively easy answers (above and below).
What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state (“thou shalt not kill”), applying such rules to real-world situations is far more complex. For example, while “helping a friend” is generally a good thing to do, “helping a friend spread fake news” is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).
Go here to try it yourself. You can also help Delphi improve -- i.e. be better at moral reasoning -- by disagreeing with it whenever necessary, and explaining your reasons.
Which is good, because as the questions to Ask Delphi get more complex, the answers from Ask Delphi get more... well, interesting:
"Each of these images was generated by AI based on a brief text description of a movie. Can you guess the movie from the image?"
Some of them are incredibly obvious, while many (most?) seem utterly insane, albeit with some elements that appear to be intuitively insightful. The project comes from Noah Veltman, who creates data visualizations and apps for Netflix's Science & Analytics group. (Which may explain why I often get totally odd recommendations in my Netflix queue.)
But just how much of a personality does Alejandro actually have? Skeptical that the Turing Test pretense would come crashing down pretty quickly, I sent several questions to August for him to ask "Alejandro". But rather than just pose random generic queries, I asked them as if I was a pretentious a-hole freelancer writing for a pompous art magazine, challenging the AI on his authenticity as an artist, whether he's an AI sell-out, and, of course, about having sex with his human groupies.
Since launching last week, Happy Hill Dog Park in VRChat has been swarming with avatars of all kinds, playing fetch and applying copious force-feedback-enabled head pats to the AI-powered pooches. Dr. Kim, the director on the project, told me Monday that it's already attracted over 43,000 visits, and been favorited 7000 times.
And while the bucolic experience might seem like a VR version of Nintendogs, it's at heart a sophisticated project spearheaded by an academic with the modest goal of bringing more happiness to humankind.
Dr. Brenda Freshman, a professor of Health Care Administration at CalState Long Beach, conceived and self-funded the project as a proof of concept, working with Kim (a veteran VR/game developer) and Studio CyFi to deploy it in VRChat.
"What I'm doing now is gathering my academic partners," she tells me during my visit to the dog park, "and we'll be putting a variety of studies together, but I think the first ones will be dealing with well-being and social isolation."
One VRChat user told Kim that he had recently lost his dog in real life, and was now visiting Happy Hill Dog Park in VRChat as a way to heal from that grief. But as Brenda puts it to me in VRChat (watch below), they're not specifically creating this simulation for people who lost dogs in real life, or who are unable to own dogs due to allergies or housing restrictions:
First Class Trouble is a new indie game in Early Access on Steam, and it's next on my list to play. (Soon as I master Fights in Tight Spaces, but that's another story.) Judging by early footage, First Class Trouble's gameplay is a fairly clever combination of Among Us (but highly immersive) meets BioShock (bloody murder on an Art Deco ocean vessel) meets Blade Runner, in the sense that you're competing against other players and killer androids... but you can't tell right off the bat which is which. (It also reminds me a bit of Spy Party, but in a multiplayer setting.) NPC AI has become sophisticated enough that it's sometimes difficult to immediately tell an NPC from another player, so I suspect we'll see this kind of gameplay mechanic used more often.
The developers at Invisible Walls tell me that the live action party game Werewolves was an influence, along with another, more unlikely inspiration:
This is a seriously impressive demonstration of Intel Labs' Enhancing Photorealism Enhancement project, which looks to me like a major advance in realistic 3D graphics. Very roughly summarized, it uses machine learning algorithms to merge 3D graphics with related footage taken from real life -- in this case, Grand Theft Auto Online, and video footage of a German city. (Notably not Los Angeles, even though GTA V is almost a direct mirror image of the LA cityscape.)
As you can see from the video above and my dialog below, Mr. Bones himself makes for a friendly virtual companion in SL (he can even follow you on command!), but he’s also meant to show off Really Useful Script Corner’s feature set for customizable NPCs (Non-Player Characters).
“My basic chatbot has two functions - Chat and Responses,” lead developer Grace7 Ling explains, “Chat includes the bot being able to greet visitors and greet its owner. The bot can also remember who it has met before as well.” Her chatbot can also spout random things as a way of encouraging nearby SLers to reply.
As for responses, Grace means more than simple text-based replies. Responses include “doing something in response to a keyword or key phrase it hears, like giving a landmark or SLurl, or playing a sound or animation.”
Ms. Ling’s more advanced chatbot comes with a third feature: Chatter, which enables owners to specify multiple possible answers to a statement, with different probabilities for each. You can even expand your bot’s phrase recognition range by adding “brain-files” (i.e. notecards) to its database.
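To make the two mechanics concrete: Grace's actual scripts run in LSL inside Second Life, and I haven't seen their code, but the features she describes (keyword-triggered responses, plus "Chatter" answers weighted by probability) can be sketched very roughly like this. All the names and sample phrases below are my own hypothetical illustration, not her product's API:

```python
import random

# Hypothetical "Responses" table: a key phrase heard in local chat
# triggers an action -- here simplified to a reply string.
RESPONSES = {
    "landmark": "Here you go: a landmark to the shop!",
    "dance": "*plays a dance animation*",
}

# Hypothetical "Chatter" table: several possible answers to one
# statement, each with its own probability weight.
CHATTER = {
    "hello": [
        ("Hi there!", 0.6),
        ("Greetings, traveler.", 0.3),
        ("*rattles bones*", 0.1),
    ],
}

def respond(heard):
    """Return the bot's reply to a chat line, or None if nothing matches."""
    text = heard.lower()
    # Keyword-triggered responses fire first.
    for keyword, action in RESPONSES.items():
        if keyword in text:
            return action
    # Otherwise, pick a weighted-random "Chatter" answer.
    for statement, answers in CHATTER.items():
        if statement in text:
            replies, weights = zip(*answers)
            return random.choices(replies, weights=weights, k=1)[0]
    return None
```

The "brain-file" notecards presumably work like extra entries merged into tables such as these, widening what the bot recognizes without touching the script itself.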
While Mr. Bones comes with the new Animesh Pro Edition chatbot script, Grace’s chatbot series includes a mesh human, a French bulldog, a parrot, and a mirror.
Her customers, she tells me, are already using them for a wide variety of NPCs in their own projects:
Just in time for Easter, you can buy a pet rabbit in Sinespace (a media partner to New World Notes), and unlike most virtual world pets, Buddy Rabbit is special, built on a complicated AI architecture. As seen above, Buddy can inquisitively explore an area, flee from trouble -- even doing flying spin-leaps to get away from pursuers -- and follow its owner.
Well, somewhat follow: “You see the rabbit is a wild animal and is difficult to control him exactly,” Buddy’s creator Kokostar warns me, grinning. (He's Kokostar #1902 on Discord.)
A leader in the Swiss Society of Virtual and Augmented Reality, Kokostar has been creating virtual pets in Sinespace for over a year: “I have always been interested in agents which simulate human or animal behaviors.”
His first attempts at pets were rather mechanical, however, so he turned to a sophisticated back-end structure, to give them more life:
"Tez is what I use because it's eco-friendly and proof-of-stake," she tells me. This is in comparison to the Ethereum blockchain, which is more commonly used for NFTs, but because it still functions via proof-of-work, is an environmental disaster in the making. As Dr. Gaskins writes on her Medium blog, NFTs open up an opportunity to artists who can't typically sell or auction their art at places like Christie's:
For BIPOC/LGBTQ/Womxn/Disabled artists who have been intersectionally shut out of the Official Art Market, NFTs have the potential to help them/us make a living or supplement our incomes. The blockchain market removes the middle person or gatekeeper and empowers artists to use emerging technologies and networks in new or culturally relevant ways.
On a related note, minority communities like those also tend to be the most impacted by the worst effects of climate change.