Since no post about virtual pro wrestling is complete without a follow-up about the AI-driven death of the Internet as we know it, here's a provocative new Philip Rosedale essay predicting just that, in what he dubs the Ultraviolet Catastrophe:
AIs are now easily able to create accounts, establish IP addresses, and send plausibly important or interesting messages at effectively zero cost. This means that the internet will soon be saturated with websites, social media, YouTube, and Twitter accounts creating messages at millions of times the rate at which humans can create content. Worse yet, many of these messages will deceptively claim to be from real people.
[Y]ou may be also suffering from the fantasy that these messages for some reason won’t fool you. In sufficient quantity and quality, they certainly will. And even when they don’t, they will “flood the channel”, making it impossible for you to find actual useful information.
I've definitely seen early signs of this effect while researching my various writing projects.
This is so bad.

In the legendary D-Day scene of "Saving Private Ryan", @tomhanks says something like "gatac gatac" into a walkie talkie. So I checked Google to find out what he said.

And @GoogleAI... just totally made some shit up. 👇 pic.twitter.com/cQChICkTDr

— Wagner James Au (@slhamlet) May 22, 2025
Google Search is being eroded by "AI summaries" which are often wrong or just rampant bullshit, even on well-known topics (see above!), while whole channels of content are AI slop. Earlier today, I came across a YouTube channel called "Second Life" which has nothing to do with virtual worlds, but is instead a slew of weirdly messianic gen AI videos. (No, I'm not linking to it.)
Anyway, as a consequence of all this, Philip believes we'll begin withdrawing from the Internet as we know it -- though I'm not sure this will include legacy virtual worlds (e.g. Second Life; more on that below):
How will we respond? At a high level, we will simply have to stop using open/public services like email, TikTok, YouTube, or social media which have a low cost to send messages. Is this a bad thing? I’m not sure. We’re already overloaded with low quality information, why not just tear off the band-aid? In a couple years we may look back and laugh at how ridiculous it was that we tolerated spending so much of our days skipping over solicitous junk.
Would he apply this takeaway to our use of Second Life? There's already quite a bit of bot spam of all kinds in SL now, I told him, and AI is only making it worse.
"Not sure but I agree that Spam and bots are as problematic in SL as anywhere," he replied.
On that point, I'm not so sure. Second Life, for instance, has been plagued by traffic bots creating the appearance of busy sims for at least 15 years. And I fear adding still more AI / NPC tools will likely lead to much more of that, degrading our perception of the virtual world as a place teeming with actual people.
"The identity and moderation systems need to be improved to better allow communities/groups/landowners to control access," Philip acknowledges. "That is already an obvious pain point with people setting tenure-based rules on accounts with scripts, etc. Yes, I agree that AI can make it worse."
Anyway, read the rest here, and don't miss my extended interview: Adding A.I. Powered Characters to Second Life: Philip Rosedale & Brad Oberwager on the Promise and Perils.
Update, July 2: Added some more thoughts from Philip!
This is both fascinating and unsettling. I’ve already felt that creeping sense of "signal loss" online where genuine voices are drowned out by algorithmically generated noise. The idea that we may retreat from open platforms isn’t far-fetched anymore, sadly.
Posted by: Golf Hit | Tuesday, July 01, 2025 at 09:56 PM
I watched news - remember news? - and then email get swamped by greed and pure mischief or malice - a true tragedy of the commons.
That it has not come to that - so far - in Second Life is more a matter of luck than good governance.
I reckon many SL users agree with that assessment, which is the basis of the resistance to AI bots in SL. I don't think the Lindens will agree, though, sadly.
Posted by: Bavid Dailey | Wednesday, July 02, 2025 at 04:48 AM
I won't miss Social Media, at all. I barely use it, save to connect with a few friends and family on one platform. I don't look at influencers or "promoted" content. Never used TikTok, Snap, Instagram, or Bluesky. When Rocket Man at Twitter let a seditionist back online, I nuked my account there.
Smart phones and social media have helped to reduce attention spans, promote misinformation, and turbocharge Far-Rightist ideologies in a way I thought dead after 1945, or at least the late 60s. The Crazies, anti-Science cultists, and haters now have a big bullhorn globally, and that's more of a threat than AI spam IMHO.
Maybe we'll become Stephenson's The Diamond Age, where artisanal crafts made by human hands will accrue value. Or maybe we'll just collapse; a different Diamond comes to mind here: Jared. His book Collapse shows that we currently have most of the indicators for societal collapse based upon environmental destruction and over-consumption.
I don't wholly buy into his thesis, because soon I think a nation, without permission, will try geo-engineering on a large scale. On the other hand, AI could accelerate other trends he mentions.
Posted by: Iggy 1.0 | Wednesday, July 02, 2025 at 12:20 PM
According to this AI-read short, the Saving Private Ryan code is CATF (Commander, Amphibious Task Force). But good point that AI can't always be trusted. Sometimes it's dumb, deceptive even. So it's always good to double-check with other sources. Same with everyone going straight to Wikipedia, it's not always right.
I saw that there's now the ability to make AI NPCs for roleplay in SL. This sounds good, but roleplay is already dying, population-wise. People are just getting tired of it all really. AI NPCs might liven things up, depending on how they're used, but I can imagine whole sims full of that and just a few roleplayers talking to themselves. It's all becoming Westworld. Now there's an idea. ;)
Really though, AI can be good, it's what one uses it for. Unfortunately the root of all evil is the love of money, so you've got greedy people using it. What could go wrong already is going wrong.
Posted by: Salty Bob | Thursday, July 03, 2025 at 04:44 PM
Here's the link to the video I found (I forgot to post it earlier) on the Saving Private Ryan thing: https://www.youtube.com/shorts/ibi1KkaSsLY
Posted by: Salty Bob | Thursday, July 03, 2025 at 04:46 PM
> "@tomhanks says something like "gatac gatac" into a walkie talkie. So I checked Google
> Google Search is being eroded by "AI summaries" which are often wrong or just rampant bullshit, even on well-known topics (see above!)
So you searched for "gatac gatac", for which even the regular Google search engine finds nothing except your tweet, your blog, and little else. Well-known my ass.
Posted by: xpert | Thursday, July 03, 2025 at 08:44 PM
This is what you would find if you hadn't seen the movie and used your search terms:
https://www.google.com/search?q=%22saving+private+ryan%22+%22gatac+gatac%22
Add "dialog" as well and the results are basically none. Entering, as you did, 'saving private ryan gatac gatac dialog' without quote marks also returns very little.
With no data, it's more prone to hallucinations.
Even though AI Overview isn't so reliable, for what is actually searchable, and depending on what you search, it is usually okay or decent; sometimes it's imprecise or outright wrong, and then there are laughable edge cases.
The fact that you had to resort to this misleading stuff to cherry-pick an example is very telling.
Posted by: xpert | Thursday, July 03, 2025 at 10:06 PM
> With no data, it's more prone to hallucinations
Yes, this is a huge problem, especially when AI Overview takes up the top page of the most popular search engine in the world, used by billions of people, most of whom don't know what an LLM even is, and just assume it's always accurate.
And it hallucinates often, even with life or death questions. In this example, it totally misstates the diabetes rate in the US:
https://x.com/slhamlet/status/1879277436208386521
Just last month, the US government published a health "report" that included multiple fake sources totally made up by an LLM:
https://www.nytimes.com/2025/05/29/well/maha-report-citations.html
Posted by: Wagner James Au | Saturday, July 05, 2025 at 07:26 PM
Great link, Salty, thanks!
Posted by: Wagner James Au | Saturday, July 05, 2025 at 07:27 PM
So use DuckDuckGo, where you can turn off AI Assist easily.
I find Google Workspace useful for my professional work, but their search engine is shite now. Gemini AI is not all that much better.
Bing was better for a while, but now it's choked with targeted results.
As far as professional use of AI, for research in academic fields, Research Rabbit is very solid. It points the way forward to a better use of AI by giving links to sources you can verify; in its case, all sources are peer-reviewed and open-access materials in academic fields.
Lazy students who turn in hallucinated materials (I do make them use AI for lots of preparation and drafting) get an F on the assignment. They then have to redo it and I average the grades. As I tell them, on the job you'd have just been fired. Learn now not to do this.
Posted by: Iggy 1.0 | Monday, July 07, 2025 at 05:24 AM
I'd say for platforms like YouTube, it won't even matter if you 100% know the content was made by a human, because it will still be made the way the algorithm wanted it to be made. It's not even genuine at that point. If everything is made for the algorithm, being human-made doesn't make it much better.
Posted by: Adeon | Monday, July 07, 2025 at 04:13 PM