Last week I noted what looks like a major breakthrough in metaverse technology from Intel, created for the company's ScienceSim project, which uses a version of OpenSimulator to display educational and scientific data in 3D. Using a system called a "distributed scene graph", Intel developers were able to get over 500 avatars into a single region, moving around and interacting without much evident lag. (Usually lag kicks in with just three dozen avatars in the same region.) I asked Intel's John Hurliman (who first came to fame as an open source Second Life coder known in-world as Eddy Stryker) for more background:
"We've never tried it with 500-plus actual users because of the logistics issues when getting that many people together for an event," John allows. However, he continues, "We've had several load tests on the ScienceSim grid and encouraged as many people as we could find to log in with as many clients as they could run, but the bulk of the avatars are still our load testing bots. Note that these aren't the vanilla libOpenMetaverse bots that are designed to minimize impact on the simulators; these modified bots try to simulate the actual load of a user by wandering around, playing animations, and downloading all of the prims and textures they see. It will always be an approximation, though, unless we get real users together."
Until then, here's what it looks like:
Even more important, Intel will eventually make this technology available to all OpenSim developers:
"All of the research we're doing in this area is out in the open and we plan on contributing this work to the OpenSim project," says Hurliman. "However, the current code is only really usable for a demo and we need to continue research and work with other OpenSim developers to build a roadmap for integrating this kind of architecture into the OpenSim codebase." In fact, as he wrote in the comments to last week's post, Hurliman believes this technology could even feasibly be integrated into Second Life: "There is no technical reason why you couldn't build the entire SL protocol on this architecture other than the time investment," as he put it.
Via John, you can read more on this technology in this .pdf presentation from Intel Labs' Dan Lake (who presents in the video above).
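To make the idea concrete, here's a rough sketch of how a distributed scene graph divides one region's load. This is purely my own illustrative toy in Python, not Intel's actual code or architecture: each partition stands in for a simulator process that owns a slice of the region, while viewers see the merged scene.

```python
# Illustrative sketch only (not Intel's code): a "distributed scene graph"
# splits one region's avatars across several simulator processes, each
# owning a spatial partition, while clients still see one merged scene.
from dataclasses import dataclass


@dataclass
class Avatar:
    name: str
    x: float  # position within the region, 0..256 in SL-style coordinates
    y: float


class Partition:
    """Stands in for one simulator process owning a strip of the region."""

    def __init__(self, x_min: float, x_max: float):
        self.x_min, self.x_max = x_min, x_max
        self.avatars: dict[str, Avatar] = {}

    def owns(self, avatar: Avatar) -> bool:
        return self.x_min <= avatar.x < self.x_max


class Region:
    """Routes each avatar to the partition that owns its position."""

    def __init__(self, width: float = 256.0, n_partitions: int = 4):
        step = width / n_partitions
        self.partitions = [
            Partition(i * step, (i + 1) * step) for i in range(n_partitions)
        ]

    def place(self, avatar: Avatar) -> Partition:
        for p in self.partitions:
            if p.owns(avatar):
                p.avatars[avatar.name] = avatar
                return p
        raise ValueError(f"{avatar.name} is outside the region")

    def scene_snapshot(self) -> dict[str, Avatar]:
        # Merge every partition into the single scene a viewer would render.
        merged: dict[str, Avatar] = {}
        for p in self.partitions:
            merged.update(p.avatars)
        return merged
```

With four partitions, 500 avatars spread across the region land at roughly 125 per simulator process instead of 500 on one, which is the basic intuition behind the scaling numbers above; the real system obviously has to handle far harder problems, like avatars crossing partition boundaries and keeping every process's view of the scene consistent.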
It would be interesting to see how it worked with real people behind the avs. Maybe you could try to get a bunch of volunteers together through your blog - I imagine it would be popular.
Posted by: Hitomi Tiponi | Monday, September 13, 2010 at 02:11 PM
I imagine real people would be less taxing than a well-designed loadbot.
And if that's the case, this could be a fundamental shift in the compartmentalization of OpenSim worlds (and Second Life, if they grasp the opportunity).
It's a hard sell to get name acts to perform live for an audience of forty. But if you can deliver 400, 4000, or more? Game changer, right there.
Posted by: Arcadia Codesmith | Monday, September 13, 2010 at 02:36 PM
If only they had 100s of avatars.
Posted by: Adric Antfarm | Monday, September 13, 2010 at 03:43 PM
Was this a 'real world' test? lol I'd like to know how many of those avs were wearing a zillion scripts and how many had on blinging parts? :p
It's a cool achievement jokes aside.
Posted by: Nine Warrhol | Monday, September 13, 2010 at 04:21 PM
An impressive achievement. Hundreds of simulated bots in what appears to be a single simulator, all without any client scripts or customizations.
Someone else already did it.
Posted by: Ann Otoole InSL | Monday, September 13, 2010 at 04:31 PM
It's possible that this needs a lot more hardware for the region than is usual, which suggests it isn't going to be commonplace. But some of the event venues could be set up to run on such a system.
Posted by: Dave Bell | Tuesday, September 14, 2010 at 03:20 AM
It suggests to me that perhaps there needs to be a floating pool of server resources, not dedicated to any single sim but available on demand to support population surges in any sim. I'm assuming the scene partitioning in the diagram happens dynamically.
Posted by: Arcadia Codesmith | Tuesday, September 14, 2010 at 06:03 AM
I remember when Hurliman came to us on #Imprudence to help with this test, and some of us were there for the stress test (not the tech demo). It does a pretty good job, but yes, the bots really needed active scripts to simulate more real-world tests. That they're marketing it as 500+ instead of the 1000+ they achieved during the stress tests is probably Hurliman being conservative with the numbers, to factor in attachments.
Posted by: Ron Overdrive | Tuesday, September 14, 2010 at 11:33 AM
This is a highly commendable effort at pushing the capacity limits of an OpenSim region. It will certainly have significant impacts on the applications we are building for simulations and training in-world. We look forward to the day it gets absorbed into the released OpenSim binaries. In the meantime, if you need live human beings to help load-test the upper limits of concurrency, just holler at Avatrian.
Posted by: Chenin Anabuki | Tuesday, September 14, 2010 at 01:45 PM
I like what Arcadia seems to be getting at: dynamic shifting of resources as needs develop. (I tried to express something like this in a comment to Dream Projects recently - tinyurl.com/2ddbfdb) The bird's eye view is scalability applied to (1) avatars and (2) regions. Currently in SL, regions are tied to a fixed hunk of server farm hardware (one CPU core), and avatars to the (single) user's computer. This design rigidity will never escape the bonds of the limitations we're used to (3-4 dozen avs per region max). And while a populated region gets laggy, most of LL's server farm cores are idling with 0 or 1 or 2 avs to deal with.
Posted by: ZenRascal Mandelbrot | Tuesday, September 14, 2010 at 03:45 PM