How well did real world companies engage the SL community last week? Tateru Nino presents the top ten mixed reality sites, and the top three native sites:
| Site (* Native reality site) | Est. avg hourly visits | Est. avg hourly visits (peak hrs) | Est. total weekly visits |
|---|---|---|---|
| * Phat Cat's Jazzy Blue Lounge | 181 | 192 | 30,448 (up 6%) |
| * City of Lost Angels | 78 | 81 | 13,120 (up 4%) |
| * New Citizens Incorporated | 71 | 97 | 12,064 (down 2%) |
| The L Word | 44 | 68 | 7,456 (up 43%) |
| The Pond | 44 | 36 | 7,392 (up 12%) |
| IBM | 35 | 30 | 5,888 (up 13%) |
| Pontiac | 21 | 32 | 3,536 (down 22%) |
| ABC Island | 15 | 13 | 2,624 (up 11%) |
| Weather Channel | 11 | 12 | 1,936 (down 11%) |
| Useful Technology | 10 | 15 | 1,744 (up 120%) |
| Nissan | 9.8 | 9.1 | 1,661 (up 15%) |
| Microsoft | 8.8 | 9.7 | 1,488 (down 1%) |
| NBA | 7.6 | 10 | 1,280 (up 65%) |
Showtime's The L Word pushes back into top place.
Community registration favors lesbians and basketball
The L Word and NBA are part of the new Community Registration Portals, which offer some new residents the opportunity to start their Second Lives at these locations instead of the regular Linden Lab-operated orientation system. Quite a number of residents voted with their feet and chose these locations. It doesn't hurt that it's Gay Pride Month, and The L Word is fully supporting the festival with its own calendar of events.
At the bottom of the charts, with less than an estimated 500 weekly visits, we have Dell, Coldwell Banker, Reebok, Adidas, Coca Cola, and Sun Microsystems. We hear that Dell has some undisclosed plans to boost their profile. It remains to be seen if they can deliver the engagement.
Useful Technology (a next-generation technology company) is on the climb, sporting an education and entertainment program from New Citizens Incorporated that continues to bring visitors and students to its site.
Visit my blog tomorrow for a much more extensive list of ranked sites (mixed and native).
Methodology
Mixed reality sites in this headcount are selected for their prominence, either from publicity or real world name recognition. Sites with consistent low traffic (500 or less weekly) may be dropped in future Headcounts in favor of other sites.
We collect data three times per day for each site: one sample at peak concurrency (10am-1pm SLT), one at minimum concurrency, and one mid-evening, Second Life Time. For each sample we count the number of people at the site at that time. We average those samples across the week, then assume that average holds constant, with each visitor spending half an hour on-site. This methodology does not necessarily capture one-time events that generate high traffic between our samples, which we'll make note of whenever possible. Headcounts do not factor out returning visitors, so the total number of unique Residents is likely to be significantly less than the estimated total visits.
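The extrapolation described above can be sketched as follows. This is my reconstruction of the stated method, not Tateru's actual tooling, and the sample values are made up for illustration:

```python
# Sketch of the weekly-visit estimate described in the methodology.
# Assumption: each visitor stays ~30 minutes, so an average on-site
# headcount of N implies 2*N visits per hour.

VISIT_LENGTH_HOURS = 0.5   # assumed average time spent on-site
HOURS_PER_WEEK = 24 * 7

def estimate_weekly_visits(headcount_samples):
    """Average the headcount samples taken across the week, then
    extrapolate that average to a weekly visit total."""
    avg_headcount = sum(headcount_samples) / len(headcount_samples)
    visits_per_hour = avg_headcount / VISIT_LENGTH_HOURS
    return visits_per_hour * HOURS_PER_WEEK

# Hypothetical samples: 3 per day (peak, minimum, mid-evening) for 7 days
samples = [12, 3, 8] * 7
print(round(estimate_weekly_visits(samples)))  # -> 2576 estimated visits
```

Note how sensitive the total is to the assumed half-hour visit length: halving it halves the weekly estimate, which is one reason these figures and Linden dwell numbers diverge.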
We're able to cover multi-sim sites a lot better with this method, so you'll see those higher in the rankings than the previous metrics we were using.
Hey, thanks for these interesting figures again, Tateru, and great for Hamlet to be sponsoring these and putting them out as some kind of definitive measurement... but.
I have just been inworld to see how your figures reflect what is really happening, and this is the point (again): everyone can go and do that now for themselves! There is no backroom measurement going on. For those not even inworld, you can see a screen grab of the inworld figures for the brands Tateru puts in her top five - http://www.flickr.com/photos/garyhayes/541544508/
Linden Lab Dwell Traffic Statistics for 5 brands as listed above 11 June:
BigPond 34610
ABC 17680
Pontiac 16716
IBM 5365
L Word 1707
It clearly shows that 'your' positions and figures are somehow very skewed towards measuring at peak US time - and, strangely in the case of The L Word, a specific peak there, given the low Linden dwell figures. Even your positions favour US communities this time (there is actually a draw between the Pond and The L Word in avg visits), so you look at the time when Australia is in bed and the US is at peak (umm, very fair). Remember, the world is round: if everyone is talking about communities being important for brands, would you mind telling me why you focus on a timezone when most of the rest of the world's communities have gone way past peak?
Other than that it is a fun measurement, and I think it odd that NWN keeps posting it.
Posted by: Gary Hayes | Monday, June 11, 2007 at 06:21 PM
I am certainly considering a fourth sample slot, giving us four approximately evenly-spaced samples across the course of each 24 hour period.
Posted by: Tateru Nino | Monday, June 11, 2007 at 07:08 PM
Remember also that comparing the dwell numbers to these sorts of samples isn't going to work. Dwell and visits/visitors just aren't comparable.
Posted by: Tateru Nino | Monday, June 11, 2007 at 07:24 PM
Thanks Tateru - I agree dwell and visits are not comparable, but dwell is a very important part of measuring engagement.
As we know with the web, 'hits' are a poor reflection of the stickiness of a site, and even more so in an immersive environment - the fact that people hang around for extended periods in certain spaces is a very, very important factor.
To show how your measurement of 'hits' is flawed in this regard, and without endorsing the company, I am using (on one small patch of the Pond) a detailed sensor from a company called Second Labs (http://second-labs.com/). These metrics measure uniques, returning visitors (plus conversion rate), and overall visits across two distance bands on this single sensor. This measurement of one small area is very precise, yet does not tally with yours. Even this one 100m scan (for 1/16th of one sim - not the whole 11 sims!) shows overall visitors averaging 1,800 per day and uniques at 310 per day. The most interesting measurement that can be gleaned from named-avatar measurement is returning visitors, and that shows a 27% conversion rate at the moment.
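The named-avatar measurement described above could be derived along these lines. This is a hypothetical sketch, not Second Labs' actual implementation; the avatar names and log are invented, and only the idea (returning uniques over total uniques) comes from the comment:

```python
# Hypothetical sketch: deriving uniques, returning visitors, and a
# "conversion rate" (returning / unique) from a log of detected avatar
# names, as a named-avatar sensor might record them.

visit_log = ["Alice", "Bob", "Alice", "Carol", "Bob", "Alice", "Dave"]

uniques = set(visit_log)
returning = {name for name in uniques if visit_log.count(name) > 1}
conversion_rate = len(returning) / len(uniques)

print(len(uniques), len(returning), conversion_rate)
# -> 4 uniques, 2 returning, conversion rate 0.5
```

With this definition, Gary's cited 27% would mean roughly 27 of every 100 unique visitors came back at least once during the measurement window.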
I would like to see Linden Lab offer this level of measurement detail to all. Properly crunching LibSL data can do this, and that is all Second Labs are doing - but why leave it to third-party companies or ad-hoc individuals?
BTW, worth pointing out that the Pond also ran the RegAPI (community log-in) system; you mentioned two sites that did, but not the other top brand?
Posted by: Gary Hayes | Monday, June 11, 2007 at 07:48 PM
Have you been to Meta's office hours to ask? Recent sessions have been discussing just what sort of new metrics to implement.
Posted by: Tateru Nino | Monday, June 11, 2007 at 07:53 PM
I'm keenly skeptical of 3rd party traffic counting mechanisms, unless a) the sensor is used across the grid, in all areas, and b) the counting methodology is transparent and independently verifiable in principle. No counting mechanism, including ours, is perfect-- people are still debating how to translate page views to unique visitors on the Web, for god's sake-- but by my lights, Tateru's method at least satisfies both of the above criteria.
Posted by: Hamlet Au | Monday, June 11, 2007 at 08:53 PM
Thanks Hamlet,
Agreed that no 3rd party measurement is perfect, which is why I have said that several times. I am more concerned with transparency - I can't check Tateru's figures because I don't know what time she measured each sim. The Project Factory have just published the 12 June figures here: http://www.theprojectfactory.com/ and everyone reading this can go inworld now and verify each of those figures, because they use the Linden Lab inworld traffic figures. I say again: these are open figures that you can add up yourself using search/places and typing in the brand name. They were picked up using LibSL from the open LL database.
Until there is something with as much transparency (however flawed you think the dwell figure is), everything else offered as a comparison must be taken with a serious pinch of salt.
Posted by: Gary Hayes | Tuesday, June 12, 2007 at 01:22 AM
Is there something mysterious about the times we specify (other than the fact that they vary naturally with actual concurrency)? Even though we vary them a bit, 75% of the samples are within 10% of the value for the same sample period the prior week, which suggests that the numbers are pretty representative across a period, and that some variation in timing has little overall effect on the results.
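The week-over-week consistency check Tateru describes could be computed as below. This is my illustrative sketch, not her actual script, and the sample values are made up:

```python
# Sketch of the stability check: what fraction of this week's samples
# fall within 10% of the corresponding sample from the prior week?

def stable_fraction(this_week, last_week, tolerance=0.10):
    """Fraction of paired samples whose relative change is within tolerance."""
    close = sum(1 for now, then in zip(this_week, last_week)
                if then and abs(now - then) / then <= tolerance)
    return close / len(this_week)

this_week = [181, 192, 78, 44, 35, 21, 15, 11]
last_week = [175, 190, 80, 30, 34, 27, 15, 12]
print(stable_fraction(this_week, last_week))  # -> 0.75 (6 of 8 within 10%)
```

A high fraction here suggests the site populations are fairly steady across weeks, so modest shifts in sampling time should not move the estimates much.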
Posted by: Tateru Nino | Tuesday, June 12, 2007 at 02:02 AM
Thanks for providing these stats. I think querying the methodology is fine, but Gary, you've more than made your point, multiple times. I can't see what vested interest NWN has in doing anything but reporting the stats objectively.
Posted by: Freya Rhinestone | Tuesday, June 12, 2007 at 06:16 AM