Second Life Server 1.27 contains an LSL HTTP-In feature which (as the release notes say) "Allows prims to become mini web-servers. Objects acquire a url, and then process http requests for that url." (More Linden notes on it here.) Opensource Obscure suggested devoting an open forum to this tech, and so here it is. My technical kung fu is weak, so I put the question to coders and scripters with the wisdom to tell: What SL-to-Web applications are you most excited to see created with HTTP-in LSL?
Update, 9:58am: How important is this feature? Cory Ondrejka, one of SL's founders, says of it: "Few will grok why, but Second Life just fundamentally expanded."
These new features give a public URL to LSL scripts (like this blog has the public "nwn.blogs.com" URL). This is cool because it lets us easily contact the scripts from a web page.
The first problem to solve is that those URLs change every time a script resets. Often enough you want an automatic system to manage that, or you will have a hard time contacting your scripts from the web.
I'm using a very simple solution based on the snipurl.com APIs - I'm sure any decent scripter can work out how to use them on their own - but feel free to ask me in-world about it.
Another, and I think better, solution has been suggested on the official forums and is now on the wiki: HTTP_Post_request_to_a_PHP_server
And a static URL service for the HTTP server, with an API (under development), can be found here: SLtools.biz
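Here's roughly what the in-world half of that approach looks like - a minimal sketch only, assuming a hypothetical register.php endpoint on your own web server that just stores whatever object key and URL you send it:

    // Ask for a URL, and push it to an external "DNS-like" web service
    // whenever it changes. The registration endpoint and its query
    // parameters are placeholders - adapt them to whatever service you use.
    string REGISTRY = "http://example.com/register.php";

    default
    {
        state_entry()
        {
            llRequestURL();
        }

        on_rez(integer start_param)
        {
            llResetScript(); // the old URL is dead after a rez; ask for a new one
        }

        changed(integer change)
        {
            if (change & CHANGED_REGION_START) // a sim restart also kills the URL
                llRequestURL();
        }

        http_request(key id, string method, string body)
        {
            if (method == URL_REQUEST_GRANTED)
            {
                // body is the newly granted URL; tell the web side about it
                llHTTPRequest(REGISTRY + "?object=" + (string)llGetKey()
                    + "&url=" + llEscapeURL(body), [], "");
            }
            else if (method == URL_REQUEST_DENIED)
            {
                llOwnerSay("No URLs available: " + body);
            }
            else // a normal incoming request
            {
                llHTTPResponse(id, 200, "OK");
            }
        }
    }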
Hope this helps as a start. Now..some ideas!
Posted by: Opensource Obscure | Wednesday, July 22, 2009 at 03:45 AM
The ability to directly address a Second Life object from outside, not merely request or poll information from in-world, will mean many more options for mixed-reality operations.
To be able to ask an object what is going on in-world, or to request a manipulation of things in-world based on web or real-world activity, closes the loop.
It also opens the way for more types of viewer: e.g. an iPhone application can make a request over HTTP for the names and locations of avatars near an object.
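A rough sketch of the in-world side of that (one pending request at a time for simplicity; the 96m sensor range and the plain-text response format are arbitrary choices):

    // Answer an HTTP GET with the names and positions of nearby avatars.
    key pending; // the HTTP request we still owe a reply to

    default
    {
        state_entry()
        {
            llRequestURL(); // the URL arrives via URL_REQUEST_GRANTED below
        }

        http_request(key id, string method, string body)
        {
            if (method == URL_REQUEST_GRANTED)
            {
                llOwnerSay("Listening at " + body);
            }
            else if (method == "GET")
            {
                pending = id;
                llSensor("", NULL_KEY, AGENT, 96.0, PI); // scan around the object
            }
            else
            {
                llHTTPResponse(id, 405, "GET only");
            }
        }

        sensor(integer n)
        {
            string out;
            integer i;
            for (i = 0; i < n; ++i)
                out += llDetectedName(i) + " " + (string)llDetectedPos(i) + "\n";
            llHTTPResponse(pending, 200, out);
        }

        no_sensor()
        {
            llHTTPResponse(pending, 200, "nobody nearby");
        }
    }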
It also removes the need for an object in-world to keep asking the outside world whether something is going on; the outside world can tell it directly, rather than through the old XML-RPC inbound messages, which are a bit clunky and not as direct.
So all in all it's very exciting, both for simple apps and for richer AR-style applications.
It even makes SL a valid visual database. It has always been a giant 3D wiki, but one that had to be adjusted either by being in-world or by the in-world side asking the outside world for information.
Posted by: epredator | Wednesday, July 22, 2009 at 03:47 AM
Some clever person needs to build some sort of quasi-DNS system so URLs won't have to change on rez.
I am really interested in what all this can do.
That comment epredator just made about a visual database is quite interesting to me. The problem is prim limits: 15,000 "records" is hardly anything more than an Access database. Be that as it may, I still see Walrus-type tool potential (see http://www.caida.org/tools/visualization/walrus/ ). 3D visualization of data with direct manipulation and interaction has massive potential, especially in education.
Posted by: Ann Otoole | Wednesday, July 22, 2009 at 04:21 AM
I commented 2-3 hours ago with some thoughts and three examples of how to make persistent URLs .. the comment was apparently published, but I don't see it anymore.
?__?
Posted by: Opensource Obscure | Wednesday, July 22, 2009 at 06:10 AM
I can't rewrite my previous comment now, but I'll at least provide the references I mentioned above with regard to persistent URLs.
Grid URL Persister
Lame Object DNS and Cross Sim Messaging
Use Snipurl APIs
SLtools - free service
discussion on the SL forums with other suggestions about persistent URLs
Posted by: Opensource Obscure | Wednesday, July 22, 2009 at 06:40 AM
On the simple-applications side, I'm currently writing an SL<->Metaplace chat/emote/presence bridge with no intermediary software, using HTTP-In. Also a microblog-to-SL script via ping.fm's "Custom URL" functionality.
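The in-world half of the microblog-to-SL piece can be tiny - roughly like this sketch, which just relays whatever body arrives into local chat rather than parsing ping.fm's actual payload format:

    // Relay whatever is POSTed to the object's URL into local chat.
    default
    {
        state_entry()
        {
            llRequestURL();
        }

        http_request(key id, string method, string body)
        {
            if (method == URL_REQUEST_GRANTED)
            {
                llOwnerSay("Point the web service at: " + body);
            }
            else if (method == "POST")
            {
                llSay(0, body); // real use would decode/parse the payload first
                llHTTPResponse(id, 200, "ok");
            }
            else
            {
                llHTTPResponse(id, 405, "POST only");
            }
        }
    }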
Posted by: Athanasius Skytower | Wednesday, July 22, 2009 at 07:04 AM
trying again .. NWN blog hates me.
how to make persistent URLs:
Grid URL Persister
Lame Object DNS and Cross Sim Messaging
Use Snipurl APIs
SLtools - free service
discussion on the SL forums with other suggestions about persistent URLs
Posted by: opens.obsc. | Wednesday, July 22, 2009 at 07:19 AM
Darien Caldwell has written a quasi-DNS system using Google App Engine. It works great.
http://forums.secondlife.com/showthread.php?t=323981
Posted by: Zak Escher | Wednesday, July 22, 2009 at 08:15 AM
Real-time communication is much, much more feasible with HTTP-in.
Katharine's IRC bridge actually used HTTP-in, which cut down the delay and allowed higher volume.
Posted by: Nexii Malthus | Wednesday, July 22, 2009 at 08:59 AM
(Sorry about that, Obscure, your comments somehow got shunted into the Spam folder. Don't worry, NWN loves you, but evidently Typepad doesn't.)
Posted by: Hamlet Au | Wednesday, July 22, 2009 at 09:29 AM
Regarding specific applications, I'd be happy to share all of my ideas once I've got them coded, and I'm not rushing to beat others to market. :)
There is usually an obligatory GTD app when this kind of technology gets released, so I'm sure that will be surfacing soon.
Regarding the issue of object URLs constantly changing, I am dealing with it by essentially setting up a mini-DNS on the web side for all of the objects that my external scripts will need to keep track of.
Posted by: Nexus Burbclave | Wednesday, July 22, 2009 at 11:11 AM
Things to do:
RSS Feed Updates
PayPal confirmation
Query for users currently on sim
AJAX capability
My biggest problem with http_request is that the URLs are not permanent. We need some kind of system where we can reserve a forwarding URL. XML-RPC suffered from the same problem, on top of its slow response time. At least with email, the address to send a message to was always the object's key. I wish they had done the same thing here: use the object's UUID to reach the object's server, and permit only one HTTP server per object or per prim.
Posted by: Dedric Mauriac | Wednesday, July 22, 2009 at 11:15 AM
OK, rather than play a completely closed hand, I have decided to toss out one idea that I will probably be working on, which this new capability should simplify. There are a number of "remote house" applications, web-enabled or otherwise, for real places.
The HTTP-in capability should greatly simplify setting up an event queue that can trigger events in-world based on external stimuli - controlling the lights in a place or switching a parcel's music stream from the web, for example.
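Something roughly like this, say - a sketch only: the "light"/"music" command format is invented, and llSetParcelMusicURL only works if the object is owned by (or deeded to) the parcel owner:

    // A tiny web-controlled "remote house" switch.
    default
    {
        state_entry()
        {
            llRequestURL();
        }

        http_request(key id, string method, string body)
        {
            if (method == URL_REQUEST_GRANTED)
            {
                llOwnerSay("Control URL: " + body);
                return;
            }
            list cmd = llParseString2List(body, [" "], []);
            string what = llList2String(cmd, 0);
            string arg  = llList2String(cmd, 1);
            if (what == "light")
            {
                // toggle this prim as a point light
                llSetPrimitiveParams([PRIM_POINT_LIGHT, (arg == "on"),
                    <1.0, 1.0, 1.0>, 1.0, 10.0, 0.75]);
                llHTTPResponse(id, 200, "light " + arg);
            }
            else if (what == "music")
            {
                llSetParcelMusicURL(arg); // arg is the new stream URL
                llHTTPResponse(id, 200, "stream changed");
            }
            else
            {
                llHTTPResponse(id, 400, "unknown command");
            }
        }
    }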
Posted by: Nexus Burbclave | Wednesday, July 22, 2009 at 11:53 AM
I have rewritten my kiosk system to use HTTP-in instead of llEmail. There are some advantages and some big gotchas too.
It's fast, REALLY fast, and the outgoing request throttling is less restrictive, and you can detect and manage it (unlike email, where the message just fails and no error is returned to your script). For me that meant a greatly increased maximum kiosk network size.
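The detection part can be roughly this - a sketch that assumes a throttled llHTTPRequest hands back NULL_KEY, with a placeholder hub URL:

    // Detect the outgoing throttle and retry later instead of losing data.
    string gTarget = "http://example.com/kiosk-hub"; // placeholder endpoint
    list   gQueue;                                   // bodies still to send

    send(string body)
    {
        key req = llHTTPRequest(gTarget, [HTTP_METHOD, "POST"], body);
        if (req == NULL_KEY)
        {
            // Throttled: keep the message and try again shortly,
            // instead of silently losing it the way llEmail would.
            gQueue += [body];
            llSetTimerEvent(20.0);
        }
    }

    default
    {
        touch_start(integer n)
        {
            send("hello from " + llGetRegionName());
        }

        timer()
        {
            if (llGetListLength(gQueue) == 0)
            {
                llSetTimerEvent(0.0);
                return;
            }
            string body = llList2String(gQueue, 0);
            gQueue = llDeleteSubList(gQueue, 0, 0);
            send(body);
        }
    }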
There are many more hoops to jump through to get things done, unlike email. This means more code, lots more testing, and more potential for bugs.
The body size is smaller - 2k vs. 4k - which for me was a major problem, requiring quite a few more hoops to jump through to return large responses.
Not having a persistent URL is a problem, as discussed above, but that's solvable, as you've read.
LL has stated that they care about HTTP-in, unlike XML-RPC and llEmail, so bugs have a higher chance of being fixed rather than, well, never, as has been the case with some well-known nasty email bugs.
There's no request queueing with HTTP, and queueing is actually a nice feature of incoming email. If your object is resetting itself, email conveniently queues up until you're ready to start reading it. With HTTP, if you aren't done initializing (i.e. reading notecard settings, then getting your URL, forwarding it to your DNS-like service, yadda yadda), any HTTP requests that attempt to use your old URL just fail. The sender has to detect the failure and then do something about it, unlike with email (in most cases).
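On the sending side, that detect-and-recover step looks roughly like this - a sketch with an invented registry lookup endpoint that is assumed to return the target object's current URL as plain text:

    // If the receiver's last-known URL is dead, look up a fresh one
    // from the registry before retrying.
    string gRegistry  = "http://example.com/lookup.php?object="; // placeholder
    string gTargetKey = "00000000-0000-0000-0000-000000000000";  // receiver's key
    string gTargetURL;    // last known URL for the receiver
    string gPendingMsg;   // message we still need to deliver
    key    gSendReq;
    key    gLookupReq;

    default
    {
        touch_start(integer n)
        {
            gPendingMsg = "ping";
            if (gTargetURL == "")
                gLookupReq = llHTTPRequest(gRegistry + gTargetKey, [], "");
            else
                gSendReq = llHTTPRequest(gTargetURL, [], gPendingMsg);
        }

        http_response(key id, integer status, list meta, string body)
        {
            if (id == gSendReq && status != 200)
            {
                // The old URL no longer answers; ask the registry for the new one.
                gLookupReq = llHTTPRequest(gRegistry + gTargetKey, [], "");
            }
            else if (id == gLookupReq)
            {
                gTargetURL = body; // the registry returns the current URL
                gSendReq = llHTTPRequest(gTargetURL, [], gPendingMsg);
            }
        }
    }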
LL claims that HTTP-in is scalable, but then they put in some serious limitations for large-scale, serious applications. Request throttling limits both how big your networks can scale AND how fast you can reply to random large clumps of incoming requests, and the 2k body size is just too small if you happen to need to pass a lot of data back and forth. The workaround is to split the data across multiple requests of 2k each, which means you've just doubled or tripled (or worse) the number of requests you're making - and the next thing you know, you're being throttled. If LL is serious about "scalability" they need to remove some of these limitations and let us write some cool applications without all the pain.
Posted by: Sasun Steinbeck | Wednesday, July 22, 2009 at 03:33 PM
The throttles are annoying to code around, and if every scripter were a responsible, competent, and ethical programmer, they'd be unnecessary.
I'm not holding my breath.
Posted by: Arcadia Codesmith | Thursday, July 23, 2009 at 06:57 AM
HTTP-in has been pushed by LL as the ultimate replacement for XML-RPC, which did not scale — allegedly, there was just one single server to handle all incoming XML-RPC calls (I have no way to validate that claim).
So mmmh what application will really, really benefit from the switch from XML-RPC to HTTP-in? :)
Guess :)
LL's own XstreetSL, of course. I'd bet it's the largest application using XML-RPC in SL :) And as soon as people start to deploy their new "Magic Boxes" (assuming the latest version already supports HTTP-in, of course), this should definitely make XstreetSL run *faster* :-)
And I have about a trillion objects (slight exaggeration...) all using XML-RPC that will slowly be moved over to HTTP-in. Hooray :)
Posted by: Gwyneth Llewelyn | Thursday, July 23, 2009 at 03:41 PM