Intel's ScienceSim project, which uses a version of OpenSimulator to display educational and scientific data in 3D, has achieved what looks like a giant milestone. Using a process they call a "distributed scene graph", Intel engineers have managed to scale an OpenSim region so that it can smoothly handle over 500 avatars at once. Watch:
The presenter, Intel Labs' Dan Lake, says they've run demos with around 1000 avatars in a region, and plan to scale this process further to handle several thousand. I'm somewhat skeptical it can work as well in the field, especially since it looks like this demo depicts several hundred bots. But it's absolutely worth following further. And so I shall.
Hat tip: Digitalurban.org
Best SL-related news I've heard in months!
Posted by: shaqq | Thursday, September 09, 2010 at 02:33 AM
I wonder how many of those bots could be supported by a Second Life server instance - but only LL could answer that. Please don't reply "a few dozen" quoting SL regions' avatar limits - I meant how many *test bots*, not *actual avatars* with lots of textures to download, attachments, and scripts running on the server: that must be a relevant part of the load that sometimes brings SL servers to their knees.
If I remember correctly, testing sessions of this size are not new for Opensim developers. Awesome and promising.
More info about Opensim:
* Opensimulator project
* Justin Clark-Casey’s personal website with regular updates about Opensim development
* News about Opensim (made by @drWhiet with paper.li, it's generated from updates from people who are in my Opensim Twitter list - please suggest more people for this list if appropriate)
Posted by: Opensource Obscure | Thursday, September 09, 2010 at 02:50 AM
The problems here should be obvious: It's great that a simulator can handle 500 avatars, but currently your frame rate plummets just from seeing more than four of them. Client-side lag wouldn't change and still needs to be seriously addressed.
Posted by: Adeon Writer | Thursday, September 09, 2010 at 06:20 AM
@Adeon, this should indeed be addressed, but how? Mostly, people need to change their own habits; you can throw as many technical specialists as you like at the renderer code, but the problem will only grow because of the inherent issue with UGC (user-generated content).
What we need are better tools for analyzing content for lag. One idea I had was to use the client's fast-timers implementation and 'attach' timers dynamically to pieces of content, along with tools for analyzing a piece of content's memory usage. With the right tools, people can help themselves help everyone. Down with ARC and up with real tools.
Posted by: Nexii Malthus | Thursday, September 09, 2010 at 06:45 AM
Intel accomplished this by creating a second class of users. Today, everyone who logs in has access to all the features of the region -- such as building.
The Intel test creates a "read only" class of users -- people who log in and can only see the region, not interact with it. And they log in through an intermediary piece of software that handles them, taking much of the load off the server.
If you were having a large meeting or a concert or a conference, for example, you would have your organizers and speakers, say, log in directly, and all your visitors log in through this aggregator thingy.
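A minimal sketch of that aggregator idea (all names here are hypothetical, not Intel's actual code): one upstream connection fans simulator updates out to many read-only viewers, so the region server carries the load of a single client rather than hundreds.

```python
# Hypothetical sketch of the "aggregator" described above. The class and
# method names are illustrative assumptions, not Intel's implementation.

class ReadOnlyViewer:
    """A passive spectator: receives scene updates, cannot write back."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def deliver(self, update):
        self.received.append(update)

    def send_input(self, command):
        # Read-only viewers can't interact with the region; input is dropped.
        pass

class ClientAggregator:
    """One upstream simulator connection shared by many passive viewers."""
    def __init__(self):
        self.viewers = []

    def attach(self, viewer):
        self.viewers.append(viewer)

    def on_scene_update(self, update):
        # A single update from the simulator is fanned out locally,
        # so the region server pays for only one send.
        for viewer in self.viewers:
            viewer.deliver(update)
```

Organizers and speakers would still connect directly as full clients; only the audience rides behind the aggregator.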
More info about what they did here:
http://www.hypergridbusiness.com/2010/06/sciencesim-demos-1000-avatars-on-a-sim/
-- Maria Korolov
Editor, Hypergrid Business
Posted by: Maria Korolov | Thursday, September 09, 2010 at 07:22 AM
Interesting news; even if the extra avatars are "read only", their new process looks very powerful. Since OpenSim lets you "outsource" land functions, inventory functions, etc., in theory my (or anybody's) OpenSim server could host 500 avatars by outsourcing scene handling to an Intel server somewhere.
Posted by: Renmiri | Friday, September 10, 2010 at 06:59 AM
Very interesting video, thanks for sharing with us!
Posted by: OpenSim | Sunday, September 12, 2010 at 02:20 PM
Maria, the article you linked to misses one important bit. We didn't create a new class of read-only avatars, and the architecture that allowed a region to scale to 1000+ users has nothing to do with restricting build access. The reason those avatars can't build is that this is an alpha-quality demo, and only the most visually compelling parts of the protocol were implemented in the new architecture: movement, appearance, and animations. There is no technical reason you couldn't build the entire SL protocol on this architecture, other than the time investment.
Put another way: when an avatar is moving around it is "writing" to the scene by changing its position. The movement update has to be processed by the physics engine along with the movements of the other 999 avatars, and all of that information is broadcast out to every connected avatar. That is happening tens of times per second per avatar. Creating and editing content is easy to handle once you solve 1000 avatars interacting with each other in a scene.
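A back-of-envelope sketch of that broadcast load (the 20 Hz update rate is an assumed figure for illustration; real protocols batch and cull updates):

```python
# Why naive avatar-movement broadcast scales quadratically: every one of
# N avatars sends position updates many times per second, and each update
# is fanned out to all N connected avatars. The 20 Hz rate is an assumption.

def updates_per_second(avatars, rate_hz=20):
    """N senders x N receivers x updates per second."""
    return avatars * avatars * rate_hz

# At 100 avatars the simulator pushes 200,000 updates/sec;
# at 1000 avatars it is 20,000,000 - a 100x jump for 10x the avatars.
```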
The technical details of the distributed scene graph are covered in Dan's slides at http://vw.ddns.uark.edu/X10/content/Extensible%20Virtual%20World%20Architectures_Slides%20%28Lake%29.pdf
Posted by: John Hurliman | Sunday, September 12, 2010 at 11:24 PM
well, OpenSim can handle more avatars, but it is hardware-dependent, and that is something you can control. i have 16 sims, but spread over 4 cores and 8 GB of RAM
if i consolidated to only one sim with the same resources, i *think* i could get close to 100
more RAM equals more avatars, since each "real" avatar can use up to 250 MB of RAM (Ruth bots might be as low as 20 MB)
more cores would also mean less lag
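Those figures give a rough capacity estimate (a sketch only: the per-avatar memory numbers are the commenter's estimates above, and a real server needs headroom for the sim process itself):

```python
# Rough avatar-capacity estimate: available RAM divided by per-avatar
# memory. The 250 MB ("real" avatar) and 20 MB (Ruth bot) figures are
# the commenter's estimates, not measured values.

def avatar_capacity(ram_gb, mb_per_avatar):
    """How many avatars fit in ram_gb gigabytes at mb_per_avatar each."""
    return (ram_gb * 1024) // mb_per_avatar

# 8 GB of RAM: ~32 fully loaded avatars, or ~409 minimal "Ruth" bots.
```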
Intel's test was under ideal conditions (and lots of hardware) but it is neat to see these things being stretched =)
Posted by: Ener Hax | Monday, September 13, 2010 at 06:12 AM
John is correct as far as I understand it. The number one source of lag is many avatars moving around at the same time. The servers have to calculate the motions and locations and rebroadcast the scene with the new avatar positions. If this new technology can handle the movement issue, textures and user content won't be that much of a factor.
Posted by: Ajax Manatiso | Monday, September 13, 2010 at 09:01 AM