
Wednesday, January 31, 2024

Comments


Nadeja

Soft is right; I was doing similar things for months. For small scripts and functions, GPT-4 is a time saver, and it also handles regular expressions (regex) well enough. Processing natural language is the job of language models, so they are commonly used for summarizing, making lists, revising text, and many other things. They can also help with finding ideas and brainstorming, as he said. Image analysis was added a few months ago; Soft Linden's idea of using it with the area search floater was clever!
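As an illustration (my own toy example, not one of Soft's actual scripts), this is the kind of small regex task GPT-4 handles well: pulling object keys (Second Life UUIDs, in the standard 8-4-4-4-12 hexadecimal format) out of a chat-log line. The sample log text and names are made up.

```python
import re

# Second Life keys are standard UUIDs: 8-4-4-4-12 hexadecimal digits.
UUID_RE = re.compile(
    r"\b[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\b"
)

def extract_keys(text):
    """Return every UUID-shaped key found in a chunk of text."""
    return UUID_RE.findall(text)

log_line = ("Object 'Gift Box' (id 2e6a7c1f-3b44-4d2a-9f0e-1c5d8a9b0e21) "
            "rezzed by avatar f81f0d8e-12ab-4cde-8f00-aa55bb66cc77")
print(extract_keys(log_line))
```

Nothing fancy, but it is exactly the sort of ten-line utility where asking the model is faster than writing (and debugging) the pattern yourself.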

Note that GPT-4 had been available for free from early 2023 until recently on Microsoft's chatbot (now Copilot, https://copilot.microsoft.com ). Also, unlike the free ChatGPT, Copilot can search the web for the latest information. Moreover, in Precise mode (teal color), Copilot provides more concise responses with fewer hallucinations (Creative mode is the opposite, but it's meant to create and invent things).
However, the free version of Copilot now offers GPT-4 only during non-peak times:
https://www.microsoft.com/en-us/store/b/copilotpro
Also, the default mode, Balanced, hasn't used the best model for a long time (and I'm not sure if Precise is still GPT-4 either).

On the other hand, ChatGPT may be more versatile, with custom instructions, answer regeneration, and other features. The free version of Copilot, besides web search, can also create and analyze images, has voice input, and so on. But I think Soft asked Grok for "spicy and naughty" because ChatGPT and Copilot are known to be a little on the puritan side (Copilot would answer, but then the filter would often delete the answer).
To test it as a search assistant, with a question similar to the one Soft asked Grok, I asked Copilot something a little more interesting for an SL user: "What are the events in Second Life for February 2024 and which one of them may offer free items?"
https://sl.bing.net/bOGjOVvHy0G (the link opens it in Balanced mode, but the query was run in Precise mode)
Copilot listed the events and concluded:


As for the events that may offer free items, the Lovers Lane Grid-Wide Hunt and the Valentine’s Shop & Hop Event seem to be promising. They are offering Valentine themed gifts and bargains. Also, the Love Is All You Need 4! - Grid Wide Hunt is celebrating the month of Love and Valentine’s Day with loving prizes and Exclusives. Please check the event details for more information. Enjoy your time in Second Life!

That's correct, and Copilot also provided a link to the source.
Of course you could just search or go to grid-affair yourself, but it's an example of how it can extract information from search results and from text in general.

Nadeja

Image generators and text generators have something in common: they can produce mediocre or impressive results depending on how you prompt them. For example, with a generic prompt you usually get a generic image, but if you want a high-quality image, you can add words like "photo-realistic", "high resolution", "golden hour", or "intricate details" to your prompt. Similarly, if you want better text, various prompting techniques have been developed over the past year. One of them, which is pretty fun, is to add "Take a deep breath and work on this problem step-by-step". As Soft said, language models can simulate human reasoning to some extent. Another thing that can improve the responses is to treat the model as a sort of friend and talk to it politely and kindly. This is not surprising, since it was trained on human data.
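If you reuse the same quality modifiers often, it is easy to bolt them on automatically. A minimal sketch in Python (the modifier lists are just the examples from above, not an official vocabulary of any particular generator):

```python
# Hypothetical prompt helpers: prepend/append the quality modifiers
# mentioned above. These word lists are illustrative examples only.
IMAGE_MODIFIERS = ["photo-realistic", "high resolution",
                   "golden hour", "intricate details"]
TEXT_PREFIX = "Take a deep breath and work on this problem step-by-step."

def image_prompt(subject, modifiers=IMAGE_MODIFIERS):
    """Decorate a bare image subject with quality modifiers."""
    return subject + ", " + ", ".join(modifiers)

def text_prompt(task):
    """Prefix a text task with the step-by-step instruction."""
    return TEXT_PREFIX + " " + task

print(image_prompt("a lighthouse at dusk"))
print(text_prompt("Rewrite this notecard in friendlier language."))
```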

And although GPT-4 has its limitations and you shouldn't expect it to win a Nobel Prize in Literature for you, that does not mean it is useless or of little use. GPT-3.5 and GPT-4 can write text with a richer vocabulary than most individuals, nearly perfect punctuation, and basically no spelling errors. So they can also help you in Second Life: improving instruction notecards and writing scripts, as mentioned, but also translations, roleplay ideas, and more. They are a good addition to the tools you use in SL.

There is also an important difference to keep in mind, so you don't have the wrong expectations. You may have heard "algorithm" and think of programming. However, these models are called language models because they model how natural language can be processed... by a neural network. You approximate and model it with mathematical functions and statistics (then you can also add hyper-parameters, filters, etc.), so you have software that simulates a neural network that processes natural language; but the resulting simulated neural network doesn't process information like an algorithm. It isn't programmed but trained, and emergent abilities have appeared, so these models can also simulate reasoning, and they can code. In other words, the simulator isn't the simulated.

To make this easier to understand: you can similarly run a simulation based on a predictive model of the formation of galaxies, and those simulated galaxies aren't algorithms, obviously. And, as with artificial neural networks, you work with approximations: you aren't simulating every subatomic particle of those galaxies, since that would be computationally infeasible. The same goes for detailed models of biological neurons, which have existed for a long time and can emulate real neurons accurately enough; they too would be impractical for LLMs on current hardware.
So what you should keep in mind is:
- Neural networks (natural ones, like your brain, and their loose artificial approximations) don't work algorithmically and are not programmed for specific responses, like the old ELIZA program. They are trained instead.
- They are trained on a huge amount of data, larger than themselves: they cannot physically store all that information (someone compared this to a lossy JPEG image).
- So there is missing information, and an intrinsic feature of neural networks (our brains included) is that missing information and input can be "hallucinated". That's great for in-painting/out-painting, or for hiding your physiological blind spot, but not so desirable in other circumstances. In other words, these neural-net models can and do make mistakes.
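To make "trained, not programmed" concrete, here is a deliberately tiny toy model in Python. Nobody writes rules telling it what follows what; it just counts which character follows which in its training text and then predicts by frequency. Real LLMs are enormous neural networks predicting tokens rather than a frequency table predicting characters, but the principle of behavior coming from data rather than hand-written rules is the same.

```python
from collections import Counter, defaultdict

def train(text):
    """Learn, by counting, which character tends to follow which."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(model, char):
    """Return the most frequent follower seen during training.
    There is no rule for this answer, only gathered statistics."""
    followers = model.get(char)
    return followers.most_common(1)[0][0] if followers else None

model = train("the theory of the thing then there")
print(predict_next(model, "t"))  # learned from the data, not programmed
```

Notice that the "knowledge" lives entirely in the counts produced by training; change the training text and the predictions change, with no code edited at all.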

Therefore you shouldn't expect them to return correct results like a calculator, to be a search engine or, worse, a database of factual information with predefined answers. LLMs don't work that way. They can, however (especially those that hallucinate less), translate your natural-language input into search-engine queries and operators, look at the results, and then elaborate on them further. And they can process language in many other ways.
So you should know your tools and use them for the right tasks.

Gwyneth Llewelyn

Nadeja's point that LLMs (and other AI tools) are trained, not programmed, is a fundamental issue that is rarely pointed out in the mainstream. We mostly assume that there is a team of researchers and computer geeks writing a lot of code to "get things right" (because, well, things like ChatGPT do appear to write rather good English, even when they're clearly hallucinating).

To understand better how LLMs work, I found a really well-written article using the bare minimum of technical jargon:

https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/

The most fascinating bit of the article is the thorough explanation of how we humans do not really know exactly how these systems work at all, and why we cannot figure it out (hint: it would require far more resources than we, as humans, possess).

What is not explained (it might require a whole book, not a simple article with a handful of simplified schematics) is how researchers are able to 'weed out' generated content that is deemed to be offensive, sexist, racist, fascist, etc. My guess is that the work done on that is far more interesting than the rest, because it requires researchers to be aware of how such results are produced in the first place — and this is something we do not know!

Obviously, one approach is simply to reject anything containing words from a forbidden list (easy, even considering that such lists, for a global audience, would need to cover all possible languages). But there may be subtle things that are clearly offensive (to a human) yet can be expressed using regular, neutral words.
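A minimal sketch of such a blocklist filter in Python (with placeholder entries, not a real moderation list) shows exactly why it is so easy to defeat: any agreed-upon replacement word sails straight through, because the filter matches only literal strings.

```python
# Placeholder entries; a real moderation list would be far larger
# and multilingual, but the weakness shown here would remain.
BLOCKLIST = {"badword"}

def naive_filter(text):
    """Return True if the text passes, i.e. contains no
    literal blocklisted word. Coded euphemisms are invisible to it."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not (words & BLOCKLIST)

print(naive_filter("this contains badword"))         # blocked
print(naive_filter("this uses a codeword instead"))  # slips through
```

This is precisely the gap the coded-vocabulary schemes described below exploit: the offensive meaning lives in shared context, not in any string the filter can match.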

Or, worse, things might have hidden meanings that the researchers are not even aware of. Two examples: because of the many word-filtering algorithms out there, neo-Nazis have devised schemes of word replacements for concepts commonly "weeded out" by such algorithms. Everyone in the community knows these keywords (some, of course, leaked out long ago), but a typical person might have no idea what they're talking about (or why such a neutral word, in a particular context, is supposed to be a racist slur, for instance).

The Chinese government struggles with the opposite issue: critics of the system regularly coin new usages of fairly neutral words to exchange messages freely on China's own IM and chatroom systems. These can be monitored as closely as China wants, both with real humans and with all sorts of AI-based technologies, but the regime's critics are clever enough to avoid having their conversations tagged as "subversive".

Here is what ChatGPT 3.5 'knows' about this process:

OpenAI, the organization behind my development, takes the issue of offensive content very seriously. They implement a two-step approach to address this concern.

Firstly, during the training phase, the model is exposed to a diverse range of internet text, including examples of both positive and negative behavior. Human reviewers are involved in the training process to provide feedback and guidelines. They follow OpenAI's content policies, which explicitly state that reviewers should not favor any political group.

Secondly, OpenAI uses a Moderation API to warn or block certain types of unsafe content. This helps in real-time content filtering when users interact with the model.


