Recent dispatches from the outside world...
"Why a 3D datacentre sounds virtually unpleasant" (ZDNet Australia Blogs)
Can managing server data in a virtual world really be more efficient than the usual means? I mentally shrugged when I first read the news that IBM was creating a 3D data center running on OpenSimulator, an outgrowth of Second Life's open source initiative. I understand the power of modeling certain kinds of information in 3D, to make it more immediately understandable and comprehensible. But IBM is claiming that doing this with server data will have a practical utility, too:
"3-D data centers are better able to consolidate the footprint of large numbers of machines only being used at, for instance, 10 percent capacity, to get rid of extraneous machines, and to monitor power and cooling, distribute workload between data centers, and even move processing to cooler sites when weather conditions are unfavorable." Here I share the bemusement of the ZDNet blogger, who wonders:
"Again, though, there's no explanation of why being 3D is any better than a conventional monitoring tool."
But maybe we're missing something, so I yield the floor to my readers with far stronger tech kung fu than mine. In plain English, how exactly is this a good idea?
Pictured: a server tower model on IBM's continent in Second Life.
This is already done with some home brew monitoring systems that I've seen. Statuses can be monitored at a glance, systems rebooted, or configurations adjusted. Data flow monitoring can also be modeled between units.
A number of telemetry systems have used text-based virtual worlds built on existing open-source engines for remote configuration and testing.
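A toy sketch of the kind of mapping those home-brew systems do: server telemetry translated into the attributes of an in-world marker. The hostnames, thresholds, and tuple shape below are purely illustrative assumptions, not any real system.

```python
# Hypothetical sketch: map server telemetry to simple 3D marker attributes,
# the kind of thing a home-brew in-world monitor might do.
# Thresholds and hostnames are illustrative assumptions.

def marker_for(host, cpu_pct, reachable):
    """Return (colour, height) for a virtual rack-unit marker."""
    if not reachable:
        return ("red", 0.2)        # collapsed red block: host is down
    if cpu_pct >= 90:
        return ("orange", 1.0)     # tall amber block: hot spot
    return ("green", cpu_pct / 100.0)  # height scales with load

# At a glance: one down host, one hot host, the rest nominal.
fleet = {"web01": (35, True), "web02": (95, True), "db01": (10, False)}
markers = {h: marker_for(h, c, up) for h, (c, up) in fleet.items()}
```

An in-world script would then rez or recolour one prim per host from a table like this; the point is that "red and collapsed" reads faster than a log line.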
Posted by: Tateru Nino | Wednesday, February 27, 2008 at 01:04 AM
I've been using many kinds of monitoring systems for multinational ISP networks and football-field-sized enterprise data centres.
We use an ASP-style monitoring tool that does everything for us: single-glance trouble-spot detection, metrics, and forecasts.
I was talking to my boss about this and we pretty much agreed that it's useless unless you can interact physically with your data centre using this as an interface.
We quickly started dreaming about robots moving hardware around, doing cabling and stuff... for that work, this would be very very very nice.
For current stuff, not that much.
Posted by: sty | Wednesday, February 27, 2008 at 03:27 AM
I think what IBM may be trying to get at is that it's one thing to simply stare at pages of text or tables, and another to actually see it all in a glance, floating all around you in a 'tangible' form.
The average human mind is often poorer at mining raw data on its own than at deriving an understanding of what that data means once it has been properly processed into 'information'.
Properly done, processing data into information and then filtering it through a well-chosen and well-designed metaphor, such as what IBM is proposing here, might be an aid of sorts. It may even allow staffers to bring into play skills that would not be activated in, say, a less visual medium.
Basically, it boils down to something Terry Pratchett once noted: man is the only animal that needs to tell stories and to hear them. Without them, the world becomes a chaotic, unmanageable mess. And often, we get told the exact same story repeatedly, but with different layers of complexity as we get used to coping with more and more detail.
Man may or may not be ready for a Ghost in the Shell-esque user interface now, but there is no reason to believe that will forever be the case. In fact, with kids being exposed to ever more complicated and advanced gameplay through improving home console and PC technology, and with the cost of producing those experiences falling (until you compensate by making the experiences more advanced xD), I believe it is only a matter of time before control methods such as the virtual Zion flight-control room seen in The Matrix Reloaded become reality.
/Boy, this comment sure went Prok style.
// Hope it makes more sense though!
Posted by: Patchouli Woollahra | Wednesday, February 27, 2008 at 03:31 AM
Well, I think Patchouli has said it, so I will just reinforce that.
Firstly, the notion that a virtual world is only a place of escape is no longer true. Data and things are flying in, out, and around them.
So at one level this is a proof of concept of data interchange: something in one environment changes something in another, and vice versa.
Secondly, just as things moved from command line to windowed mouse environments there is no reason not to explore the VW side of things.
Thirdly, we start to help people understand and visualize what they are doing and what the massively complex systems they operate are doing. It does not mean that everything will need to be done this way; command lines can still be used, but for human understanding we know a picture paints a thousand words.
Data centres are one element of a system; they are the most easily instrumented, but they form part of an entire business with people, offices, politics, customers and business processes. Can anyone in any business today easily say "yes, I know what is going on"? Or do they have to sit through ppt and spreadsheets to gain some sort of understanding? If we use more visual and physical cues and have the business visualized in its entirety, then people can work in it, see the consequences of actions, and get a feel for "status": all those things we currently have filtered out due to complexity.
So in many ways the arguments for a data centre in a virtual world are the same as the arguments for choosing to interact with other people (even in a social setting) in a virtual world: it adds something to our perception that we lose through traditional electronic media.
Posted by: epredator | Wednesday, February 27, 2008 at 06:08 AM
Thanks for the background, epredator, that's helpful. But I'm still curious about the specifics mentioned in the passage from the IBM press release I cited ("better able to consolidate the footprint of large numbers of machines", etc)... can you explain that a bit more?
Posted by: Hamlet Au | Wednesday, February 27, 2008 at 08:00 AM
http://www.caida.org/tools/visualization/walrus/
3D visualization gives you a better comprehension of the big picture.
Second Life would benefit greatly from having its code and data architecture dependencies graphed with Walrus. there is so much that can be done, but hey... most people can't comprehend the really big picture, so it's sort of tiresome to bother explaining it.
people that know what they are doing use it all the time. take a look at the posted examples of Walrus in use. it is nothing new, nor is it novel. it's just something serious meta-analysts do that most people are not capable of comprehending, because they do not have the mental framework to support advanced concepts. you're born with such mental frameworks or you're not. same as how chess masters are born, not trained.
you either "see it" or you don't.
Posted by: Ann Otoole | Thursday, February 28, 2008 at 02:58 AM
Must resist... joke...
Ah hell...
Does walrus hav... bukkit... rendering?
xD
Posted by: Patchouli Woollahra | Thursday, February 28, 2008 at 08:07 AM
lulz!
Posted by: Hamlet Au | Thursday, February 28, 2008 at 09:33 AM
I am still with Hamlet on this when it comes to the actual daily operations of a data centre. Making avatars have to walk or camera over to a 3D virtual server in order to get information or initiate an action is far less efficient than pounding out a command line or using a flat interface.
How simply changing to a 3D screen interface to the very same RL machines is going to save millions of dollars and reduce data centre power consumption is still a mystery to me too. If anything, 3D interfaces and environments require more computational power, not less.
However, for designing and modeling a data centre, even simulating its operation, a 3D approach makes a great deal of sense and would be a powerful application.
Also, I find Ms. Otoole's reasoning self-defeating (and a touch condescending): if most people 'don't get it', then it is doomed to failure as a widely adopted approach.
Posted by: HatHead Rickenbacker | Thursday, February 28, 2008 at 12:01 PM
Sounds like a great tool for those high level executives that can only see the company through optimistic powerpoint slides. Or Dilbert's boss. This way they can look hip and pretend to be working at the same time :)
Posted by: Renmiri Writer | Thursday, February 28, 2008 at 06:16 PM
apologies for the vitriolic nature of my comment. 3d visualization is a sore spot with me. i have been both rewarded with bonuses and decried by flat-earther executives for promoting the concept. the issue is simple: don't try to convince executive decision makers that something they cannot comprehend is good for the company. they don't like to look obsolete, and will engage in constructive termination activities to let you know you and "your kind" aren't welcome.
use the technology. if you can make it a cost-effective, productivity-enhancing part of your solution, then just do it. same concept as the myriad cron shell scripts out there running today. they don't see the glamorous light of day but they are getting the job done. if you're looking for problems in code, then figure out how to make a tool like walrus work for you and use it. there are commercial code analysis tools that will do that job in a squashed 2D representation. but walrus and the other neat tools at caida are free. no need to justify a chunk of the project budget for them.
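For concreteness, the unglamorous cron-style check described above can be sketched in a few lines. The thresholds and the stand-in reading are illustrative assumptions, not a real tool.

```python
# A minimal sketch of a cron-style check: read a metric, compare
# against thresholds, report a status. Thresholds are illustrative.

def check_disk(used_pct, warn=80, crit=95):
    """Classic threshold check: turn one percentage into a status word."""
    if used_pct >= crit:
        return "CRITICAL"
    if used_pct >= warn:
        return "WARNING"
    return "OK"

# In a real cron job the reading would come from shutil.disk_usage()
# or a remote agent; 87 here is a stand-in value.
status = check_disk(87)  # -> "WARNING"
```

A cron entry would run this every few minutes and mail or log the result: no glamour, but the job gets done.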
Posted by: Ann Otoole | Thursday, February 28, 2008 at 09:51 PM
Usually I would think that "better able to consolidate the footprint of large numbers of machines" is the sort of terminology they use in data centres :-)
In general, getting a sense of the volume of usage and the general location of machines may mean that the person running the operation can see a better way to use them. We obviously have some automated provisioning systems for when machines get busy or need to do something else, but there is always a human aspect to this management.
If we are dealing with pure machines in pure data centres, then any sort of dashboard works to get the status. The key here is bringing in some awareness of other factors.
The important point is the representation of real world and virtual world things merging. Sometimes a 3d view gets across more information about a situation. Sometimes a blinking red light does the job.
If the cost of a 3d representation is actually the same, and the people using it get to convey more information to one another then it seems worth trying.
The element of experimentation in this should not be overlooked. If no one hooked up a live data centre to a visualization like this, we would only be guessing whether it is a workable idea.
Posted by: epredator | Friday, February 29, 2008 at 01:32 AM
Back in the olden days of 1991, I got a unique assignment from my boss: develop a graphical model to show the flow of communication between software components on a network of computers, so that students, researchers, and system administrators could, at a mere glance, get a "feeling" for how the "grid" was performing. I was scared because I thought I would be laughed at for developing that kind of thing. And I was right. When I left my brief research career a few years later and started working at "real" data centres, it was clear that graphical displays were "toys"; real, hard-core system administrators used shell commands.
A few more years, and the "hard-core system administrators" started using tools that cost millions of dollars and looked pretty much like the "toys" I had played with. I frowned at the new generation; I was taught to shun the pretty graphics and get back to work instead. "Real" system administrators continued to use shell commands. Newbie sysadmins, or people who used Windows servers, well, they could play with the "toys".
Nowadays, however, I'm told that all the "real" system administrators use graphical interfaces. In fact, I'm utterly shocked when I meet this new generation of self-proclaimed sysadmins that use web-based utilities to configure servers. When I ask them if they know how to run a Linux server, they answer: "sure, I do know Plesk/cPanel/webmin very well".
Now I see IBM researchers playing around with 3D models to give visual feedback of how well a data centre is performing. I'm sure that my boss in 1991 would have been delighted with the idea! He was certainly open-minded enough to do the same, if we just had powerful enough graphics cards back then (the software I used did actually some primitive 3D, but it was too slow for real-time monitoring).
However, I'm also pretty sure everybody's laughing a lot at IBM right now, for having the courage to go one step further with their monitoring technology. As always, if you're ten years ahead of your time, you'll be laughed at. I lost my own opportunity back then; I guess I should have listened to my boss more.
I wonder what people will say in 2018, though ;) It wouldn't surprise me much if, in future interviews, potential candidates for system administrator told me: "sure, I know how to configure a Linux system; I know all about Second Life". I'm smiling about the idea right now... but... the IBM researchers are not.
Posted by: Gwyneth Llewelyn | Friday, February 29, 2008 at 04:44 PM
It took me a minute to digest the IBM release myself but then it clicked and I totally understand it. It's an experiment in a new way to rationalize data.
Given that IBM has released a SameTime bridge for SL, not to mention the basic HTTP support already available, I can't help but imagine that there is two-way communication going on in this system.
I've been dealing with several projects where we're now looking for new ways to rationalize data presentations in 3-D, and why build a whole new platform to do it when it can be done effectively in SL?
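A rough sketch of what the two-way half could look like on the bridge side. SL's HTTP support is real, but the command whitelist and the function here are made up for illustration:

```python
# Hypothetical bridge logic: the in-world display polls a status URL,
# and avatar clicks post commands back. Everything named here is an
# illustrative assumption, not IBM's actual system.

ALLOWED = {"reboot", "drain", "restore"}  # whitelist of in-world commands

def handle_inworld_command(host, command):
    """Validate a command posted from the virtual model before acting."""
    if command not in ALLOWED:
        return (400, f"unknown command {command!r}")
    # A real bridge would now call the provisioning API for `host`;
    # here we just acknowledge, since the endpoint is imaginary.
    return (200, f"{command} queued for {host}")
```

The in-world model would POST a command when an avatar clicks a rack, and the bridge validates it before touching any real provisioning API.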
Posted by: Zee Pixel | Wednesday, March 05, 2008 at 02:12 PM