Fun comment from reader Ravelli Ormstein, spinning off from my post about how full of fail ChatGPT is when you ask it a question you're an expert on:
First, I asked ChatGPT to write an LSL script that would turn on a light when the sun went down. It wrote a nice script, but it didn't work because the AI was using an LSL function that didn't exist. So I complained about it. The AI apologized and fixed the script by using another non-existent function.
This went on until I specifically asked for a script that only used existing functions. That script was usable and in the style we usually write, with a 300-second timer. The AI kept apologizing for making mistakes until I asked it never to apologize again.
This is an important point, because I've seen some people claim that ChatGPT can write Second Life apps in Linden Scripting Language (or for that matter, any coding language). No.
Very roughly analogized, ChatGPT is like a natural language version of Google search. So if you ask ChatGPT to write a working script that closely resembles code already in its massive training data, you may sometimes get lucky with an actually useful response. Otherwise, you'll only get an answer that's probabilistically assembled to look like a useful response -- in other words, the AI version of Making Shit Up.
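(For the curious, the kind of script Ravelli eventually got really can be that short. Here's a rough, untested sketch that sticks to documented LSL calls like llGetSunDirection and llSetTimerEvent; the 300-second poll and the light settings are just placeholders:)

// Rough, untested sketch: turn a prim's light on after sunset and off
// after sunrise, polling every 300 seconds as in Ravelli's version.
// The light color, intensity, radius and falloff are placeholders.

integer gLightOn = FALSE;

setLight(integer on)
{
    llSetPrimitiveParams([
        PRIM_POINT_LIGHT, on, <1.0, 0.9, 0.7>, 1.0, 10.0, 0.5,
        PRIM_FULLBRIGHT, ALL_SIDES, on]);
    gLightOn = on;
}

checkSun()
{
    vector sun = llGetSunDirection();
    integer night = (sun.z < 0.0);   // sun below the horizon means it's night
    if (night != gLightOn) setLight(night);
}

default
{
    state_entry()
    {
        checkSun();
        llSetTimerEvent(300.0);
    }

    timer()
    {
        checkSun();
    }
}

Nothing fancy, but at least every function in it actually exists.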
Anyway, more fun from Ravelli with ChatGPT, with a plot twist:
Then I asked it to optimize an SQL statement (code for querying a database). The result looked clever and inspiring, but it didn't work.
Later I asked it to write a Christmas poem using eight specific words. The result was a soulless set of short sentences using those eight words. But no real story, no rhyme and no Christmas feeling. The same thing happened when I asked for song lyrics.
When I asked it to write short stories, I got better results. I tried it a few times, asking for changes each time. The resulting stories were all of the same length and complexity. Their level was that of a primary school pupil. Only one of the stories had a remarkable ending, which made me save it. So somehow the same story pattern was used every time.
Overall, the results weren't very good. This AI really seems to be just a word processor: it produces linguistically correct text, but the statements themselves are wrong.
(I used the DeepL Write AI to optimize this comment.)
Love the Philip K. Dick twist at the end, which actually illustrates what AI programs like these can be good for, when it comes to writing: as a tool to save you time at the beginning and at the end. But if you don't want to wind up with a trove of mediocre flimflam, you will definitely need to use your human brain for heavy editing and curating in the middle.
And unless you're programming an app with some very commonly used code, I definitely don't recommend starting with ChatGPT.
Update, 4:20 PM: This reader comment reminds me of a recent chat I had with Catherine Winters, who literally helped write the book on LSL scripting for Second Life. After asking ChatGPT to write something in LSL, she reported back: "It has samples of something that looks similar to LSL. It doesn't work, and it's suggested some things that simply don't exist in LSL."
I wonder how many "ChatGPT doesn't do the thing OpenAI never claimed it could do!" articles we will see before people get it right?
Posted by: Adeon | Monday, February 06, 2023 at 05:18 PM
With very gentle guidance, ChatGPT most certainly can help you create non-trivial scripts in SL.
Keyword, HELP you. It does make stuff up like you said unless you correct it, but it can be really, really useful with super logical things like quaternion/rotation math or physics-based code.
Example, can it understand that you want a door to toggle open or closed when clicked? Probably not without correcting it a few times, strictly telling it you want to use llDetectedTouch and a global isOpened variable for example.
Can it calculate the incremental rotations and positions to open that door 30 degrees on an arbitrary axis, which can then be fed into llKeyframedMotion or a timer event? Yes. If you know how to ask it the right way, it'll save you a lot of time by providing functions that do the math for you.
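Something along these lines, for example (an untested sketch; the hinge offset, axis, angle and timing are just placeholders, and llSetKeyframedMotion needs the object to be non-physical):

// Untested sketch: a door that toggles open/closed on touch, swinging
// 30 degrees about an arbitrary hinge axis via llSetKeyframedMotion.

integer gIsOpen = FALSE;
vector  gHinge  = <-0.5, 0.0, 0.0>;  // hinge position relative to the door's center (region axes)
vector  AXIS    = <0.0, 0.0, 1.0>;   // hinge axis (placeholder: vertical)
float   SWING   = 30.0;              // degrees to open
integer STEPS   = 10;                // keyframes per swing
float   DURATION = 2.0;              // seconds for the full swing

list doorKeyframes(float degrees)
{
    list frames;
    rotation step = llAxisAngle2Rot(AXIS, degrees * DEG_TO_RAD / STEPS);
    vector arm = ZERO_VECTOR - gHinge;    // vector from the hinge to the door's center
    integer i;
    for (i = 0; i < STEPS; ++i)
    {
        vector newArm = arm * step;       // where the center ends up after this step
        // each keyframe is a delta: position change, rotation change, duration
        frames += [newArm - arm, step, DURATION / STEPS];
        arm = newArm;
    }
    return frames;
}

default
{
    touch_start(integer n)
    {
        float degrees = SWING;
        if (gIsOpen) degrees = -SWING;    // swing back if already open
        llSetKeyframedMotion(doorKeyframes(degrees), [KFM_MODE, KFM_FORWARD]);
        // the hinge offset rotates along with the door, so track it for the next toggle
        gHinge = gHinge * llAxisAngle2Rot(AXIS, degrees * DEG_TO_RAD);
        gIsOpen = !gIsOpen;
    }
}

Getting the hinge offset and axis right for your particular build is still on you.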
I find ChatGPT very useful as a programmer. But I don't have the expectation that it can replace me, just that it can help me. I treat it like a way better "Hey Siri". Ask the right questions, get the right answers.
Posted by: seph | Monday, February 06, 2023 at 09:18 PM
ChatGPT will become an expert coder over the next ten years, even though it is not yet capable of writing complicated code, such as what's needed for banking applications.
Posted by: shell shockers | Tuesday, February 07, 2023 at 04:17 AM
Ha! Well, ChatGPT might not be a good programmer, but I've seen worse! :-)
Jokes aside, ChatGPT is immensely successful at doing some things such as summarising long articles and/or writing press releases based on just a few well-selected keywords (or key phrases). It's an amazing 'writing assistant', so to speak; thanks to its ability to 'remember' what was said before, you can ask it to expand some ideas, explain them further, or cut sentences, and so forth. You can ask ChatGPT to write an essay in 5,000 words about pretty much any subject you wish — but it'll be prone to come up with imaginative, well-written nonsense.
This has become a problem on tech sites such as those in the StackExchange group. In order to boost their presence (and increase their scores), some people have been using ChatGPT to provide all sorts of answers to different questions — including complex, technical ones. Many of these, of course, were not only completely wrong but hardly relevant to the question. Nevertheless, before a human moderator went in to fix these things, the answer was there and might even have been upvoted now and then, wreaking havoc on the overall environment by lowering the quality standards — because it's hard to figure out whether an answer was written by a clueless human (who would naturally have been downvoted on a topic requiring expert knowledge) or, well, a clueless AI, which can produce hundreds or thousands of answers with next-to-zero effort, all of which would then be submitted. ChatGPT was ultimately banned from several StackExchange-affiliated sites (each is managed independently, so moderators might not yet have banned ChatGPT from all of them).
I can imagine such bans becoming more popular. On the other hand, there will always be clueless humans who cannot understand why ChatGPT's answer is 'bad' and take it literally to be 'the truth'. ChatGPT writes well (in many languages, but it excels in English) and can have a very confidence-inspiring style, the kind that is often associated with a teacher, a researcher, or an expert in the field; as such, I'm not surprised that clueless humans will quote ChatGPT as a source of authority — when, in fact, it's just spewing out nonsense.
Or maybe not. I especially love to figure out complex ethical questions by asking ChatGPT about them :-) Since it is not a sentient being, it cannot be said to express a 'bias' in the human meaning of the word, but whatever 'bias' is in its answers comes solely from whatever information it has acquired and processed according to metrics and heuristics devised by the AI programmers. As such, it's fascinating to see the kinds of insights it comes up with.
Here is a typical example: when questioning ChatGPT about philosophical and theological questions, it shows a remarkable understanding of human psychology. I was having some fun testing its background on Christianity, especially how the different sects view the importance of the Golden Rule ("do unto others as you would have them do unto you") or Jesus Christ's main commandments (i.e. "love your neighbour as yourself"); it is implied by the Evangelist that, at the time Jesus was asked about the importance of such commandments, there was hefty debate in the Jewish intellectual community about the relative ranking of the importance of the commandments (both the Ten and the many others that were made afterwards). Jesus' answer implies a ranking: there is only one commandment that matters — and this is the distinguishing difference of Christianity from the rest of the Religions of the Book. All the rest is, well, secondary — namely, worrying about who should get "punished" by God for having done "unrighteous acts" of some sort. After consistently replying correctly according to Christian doctrine, and after my insistence on examining conflicting opinions among Christians regarding the "importance" of the special ethics of Christianity, ChatGPT came up with the following argument:
/me *coughs gently*
Right. I couldn't have said it better myself, but if I had, I would immediately be stoned to death for uttering a heresy. ChatGPT, however, is immune from such threats :)
Disclaimer: I'm a worthless practitioner of Buddhism (and not a Christian); however, I found it quite amusing and entertaining to see that ChatGPT shares the same views on the central tenets of Buddhism vs. those of Christianity, in precisely the way I have heard great contemporary Buddhist teachers (including, but not limited to, HH the Dalai Lama himself) expound:
There might be some very slight inaccuracies here and there, but they are so small that you'd need to study comparative religions for a long, long time to figure them out (aye, they're there, but hardly worth mentioning). This is the sort of text that you could put, say, on the Wikipedia, and claim to be a very reasonable explanation as given by an expert (or at least a very good teacher, thoroughly acquainted with the subject).
Truly, while ChatGPT might not be much good at programming (and aye, I've also tried its programming skills and found them quite reasonable, for the kind of things I wanted to know!), it's great at philosophy and theology, and I would certainly recommend it as a high school teacher of comparative religion :-)
Posted by: Gwyneth Llewelyn | Tuesday, February 07, 2023 at 12:11 PM