Pictured: From a 2014 Philip Rosedale presentation, in which he describes High Fidelity's goal of achieving "mouth to ear" latency of 100 milliseconds
Excellent reader conversation on my post about the networking section of Matthew Ball's analysis. My take is that the unbreakable nature of the speed of light will always hold us back from our ideal vision of the Metaverse. Or as I put it, "When it comes to the metaverse, Einstein is the ultimate cockblocker."
Longtime metaverse evangelist Ian "Epredator" Hughes proposed a cheeky solution:
This speed of light latency problem has me pondering quantum entanglement and instant changes at infinite distances.
Yes: Making the ultimate metaverse might require learning how to practically leverage quantum entanglement theory -- i.e. a breakthrough in quantum computing that's readily available for consumers. (Which, while I'm not an expert, seems decades away, if ever.)
Fortunately Epredator proposes some more practical solutions that could be implemented now:
[J]ust as we now have AI upscaling images, or choosing what it thinks it needs to render before it does, some element of forward prediction helps with latency mitigation. It's kind of like what games do already: if packets drop, they predict your next position (not always accurately, but it's a start).
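The forward prediction Epredator describes is essentially dead reckoning: when an update fails to arrive, the client extrapolates from the last known state. A minimal sketch of the idea (the class and function names are illustrative, not any engine's actual API):

```python
from dataclasses import dataclass

@dataclass
class PlayerState:
    x: float          # position along one axis, meters
    vx: float         # velocity, meters/second
    timestamp: float  # when this state was received, seconds

def extrapolate(last: PlayerState, now: float) -> float:
    """Dead reckoning: assume constant velocity since the last update."""
    dt = now - last.timestamp
    return last.x + last.vx * dt

# Last packet said the player was at x=10 m moving at 5 m/s, 0.1 s ago.
last = PlayerState(x=10.0, vx=5.0, timestamp=1.0)
predicted = extrapolate(last, now=1.1)  # 10 + 5 * 0.1 = 10.5 m
```

As the quote notes, the guess is wrong the moment the player changes direction, which is why games pair extrapolation with a correction step when the real packet finally arrives.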
Metaverse content creator Lex4art does a deep dive on the latency challenge, along with some potential solutions:
[T]here are some significant tech improvements needed even for a "far from ideal, full of compromises, single-country-scale, not-quite-metaverse but decent virtual world", like somehow starting to mass-produce "vacuum fibers" (to get a full-light-speed medium that allows 40-60ms latency coverage at least for the US, with servers located near its geographical center).
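The "vacuum fiber" point comes down to simple arithmetic: light in standard fiber travels at roughly c divided by the glass's refractive index. The figures below (index ~1.47 for typical single-mode fiber, ~4,000 km as a rough US coast-to-coast distance) are common approximations, not measurements:

```python
C = 299_792.458     # speed of light in vacuum, km/s
FIBER_INDEX = 1.47  # typical single-mode fiber (assumed approximation)

def one_way_ms(distance_km: float, refractive_index: float = 1.0) -> float:
    """One-way propagation delay in milliseconds for a given medium."""
    return distance_km / (C / refractive_index) * 1000

us_coast_to_coast = 4_000  # km, rough straight-line figure
print(f"fiber:  {one_way_ms(us_coast_to_coast, FIBER_INDEX):.1f} ms")  # ~19.6 ms
print(f"vacuum: {one_way_ms(us_coast_to_coast):.1f} ms")               # ~13.3 ms
```

So a vacuum (or hollow-core) medium claws back roughly a third of the propagation delay, which is why the comment frames it as necessary for 40-60ms coverage once routing overhead and non-straight paths are added on top.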
Math is also a problem: even very coarse, real-world-like physics for clothing on all characters, fleshy soft bodies, and decent destruction is beyond what math-based processors can do. So we need something I'd call "context-based processors": just as current CPUs run the x86_x64/ARM/etc. instruction sets to process a mathematical context, these "context processors" would run a special, non-math context instruction set. But there aren't many breakthroughs on that horizon; dedicated neural-network CPUs are still math-based monstrosities, just stripped down to the range of NN math operations...
But something interesting can appear even on current-gen tech. Some types of virtual fun (like slowly building stuff in Minecraft) don't need low latency; just hide the delays from the user in a smart enough way when they interact with the world, and that already works well. A world-scale virtual world also may not be worth it, simply because language and culture barriers make person-to-person interaction less interesting and quite clumsy. And if some super-cool virtual art was created in one distant country, maybe it will be enough to simply copy it to every other country's data centers, so at least the art can be shared with good latency and download speeds... we'll see.
Oh, and how could I forget the "cherry on top of the metaverse cake": the networking model for this kind of project is "the server does most of the stuff", so we can have secure payments and content distribution, no cheaters, and no trespassing in VIP/personal zones. This is how Second Life, Sinespace, and World of Tanks are built, but it also means 2x latency. You hit a movement key, that goes to the server, which calculates the movement amount and permissions, then returns the result so your client can animate the character using the received data. So only a very limited set of metaverse activities work over that kind of connection, but it's the only way to do things securely.
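The "2x latency" of a server-authoritative model can be made concrete with a toy timing model: the client cannot show the result of an input until a full round trip (plus server work) has completed. The delay figures below are illustrative assumptions, not measurements:

```python
def perceived_delay_ms(one_way_latency_ms: float, server_process_ms: float) -> float:
    """Server-authoritative input: keypress -> server validates -> result -> client.
    The client cannot animate the character until the round trip completes."""
    return 2 * one_way_latency_ms + server_process_ms

# With 40 ms each way and 5 ms of server-side validation, the player waits
# 85 ms between pressing a key and seeing the character move at all.
print(perceived_delay_ms(40, 5))  # 85.0
```

A client-predicted model hides that wait by moving immediately and correcting later, but then the server is no longer the sole authority, which is exactly the security trade-off the comment describes.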
Latency values without a stated distance are pretty much meaningless. Paper cups with a string are low latency. They need to state them to a point on the other side of the world. On a side note, musicians struggle to stay in tune and in time with latencies over 4ms.
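Judas's 4ms figure for musicians translates directly into a distance budget. In air, sound covers only about 1.4 m in 4 ms (which is why even players a few meters apart in an ensemble feel it), while light in fiber covers roughly 800 km. The speed of sound and refractive index below are standard approximate values:

```python
SOUND_M_S = 343          # speed of sound in air at ~20 C, m/s (approximate)
C_M_S = 299_792_458      # speed of light in vacuum, m/s
FIBER_INDEX = 1.47       # typical single-mode fiber (assumed)

budget_s = 0.004  # the 4 ms ensemble-timing budget

print(f"air:   {SOUND_M_S * budget_s:.1f} m")                    # ~1.4 m
print(f"fiber: {C_M_S / FIBER_INDEX * budget_s / 1000:.0f} km")  # ~816 km
```

So a jam session over fiber is physically capped at well under a thousand kilometers one way, before adding any routing, queuing, or processing delay, which supports the point that a latency number means nothing without the distance it covers.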
Posted by: Judas | Tuesday, August 03, 2021 at 01:27 AM
I'd settle for a crappy-looking but reliable metaverse that had easier content-creation tools and an intuitive UI. One that would run on a student laptop across platforms (say, browser-based) or a tablet.
One where my fake car would not fly to pieces at sim-crossings, leaving me floating while holding a steering wheel, like a driver in some Looney Tunes episode.
That's all educators need, not a world requiring Blender to make content and a desktop gamer machine to run well.
Posted by: Iggy 1.0 | Wednesday, August 04, 2021 at 08:29 AM
I want to add a strong counterargument to one of Lex4art's claims.
Worldwide metaverses were EXACTLY what educators liked, during our heyday in SL from about 2007-2010. We held meetings with distant colleagues and saw their builds. Our students met online and multi-language work proved a solid educational application using SL voice.
In other cases, we worked fine around language barriers, too. SL translators have gotten better since then (eleven years ago!), something I've seen during my occasional forays in-world for an educators' meeting (a few of them carry on).
Lex4art is thinking in terms of gaming with shards in different regions, not changing the world (a promise Rosedale himself seems to have forgotten).
Posted by: Iggy 1.0 | Wednesday, August 04, 2021 at 06:50 PM
Sure, the more compromises you apply (ditching decent graphics and fast-paced player-to-player or player-to-world interactions, handpicking players from an active student/teacher environment), the easier it goes on current-gen tech and the current state of things. But this is a very niche thing; can we still call it a metaverse with all those limits in play? All of this was and still is there in Second Life right now, but that kind of "metaverse 1.0" is not that attractive anymore and didn't grow much even in the pandemic: it looks very poor (childish graphics don't match expectations of that mysterious "true metaverse"), laggy interactions with the world and other players are annoying as hell, the slow downloading of every decent-looking location you visit is also incompatible with the "metaverse" idea, and the general population isn't ready to deal with language problems in every texture with text on it, every audio message, and every person-to-person interaction in each country-specific location (this can be solved to a degree, but it's another technical challenge to overcome).
Posted by: Lex4art | Thursday, August 05, 2021 at 01:52 AM
Ugh, my grammar still leaves a lot to be desired!
Anyway, it feels like a true metaverse (a universal-purpose virtual world for everyone?) is possible only if it's founded on a set of precise compromises and smart technical solutions.
After dreaming a bit more about those solutions, the only thing I can come up with is a purely cloud-computed Metaverse, where clients receive only a video stream. This is the most flexible and secure solution:
- Anyone can join; hardware requirements are super low (a fast enough internet connection, some kind of display, and some kind of input to move around and interact with the metaverse). There's also flexibility in the monthly fee: the lower the requested rendering quality and video stream bitrate (less traffic), the less it costs the client; maybe even free connections would be possible at the lowest quality. So if you need quality, pay more; if you don't, pick a less fancy option that still gets the job done.
- The metaverse cloud architecture allows another plot twist: let's split player activities between two different types of servers inside the cloud, one type that processes fast-paced (latency-sensitive) activity, with everything else put on low-latency servers far from the client location. So when the client tries to move or interact with the in-world UI (that <150ms rule from Google for non-annoying interfaces), their client sends the movement/click data to the nearest cloud node, which can verify it, perform the player movement or UI interaction, and return an updated image in <50ms. But when the client performs a money transaction, it's done on different servers inside the cloud, probably located in a home country with secure laws and a stable political system that respects people and the law *sigh*.
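The two-tier split sketched above amounts to a routing rule: latency-sensitive actions go to the nearest edge cluster, trust-sensitive ones to a central authority. The action names, sets, and tiers below are hypothetical illustrations of the architecture, not a real API:

```python
from enum import Enum

class Tier(Enum):
    EDGE = "nearest edge cluster (fast round trip)"
    CENTRAL = "central secure cluster (latency-tolerant)"

# Hypothetical classification of in-world actions by what they need most.
LATENCY_SENSITIVE = {"move", "ui_click", "voice"}
TRUST_SENSITIVE = {"payment", "item_transfer", "land_permission"}

def route(action: str) -> Tier:
    """Pick the server tier for an action: responsiveness vs. authority."""
    if action in LATENCY_SENSITIVE:
        return Tier.EDGE
    if action in TRUST_SENSITIVE:
        return Tier.CENTRAL
    return Tier.EDGE  # default: keep unclassified interactions responsive

print(route("move").name)     # EDGE
print(route("payment").name)  # CENTRAL
```

The design choice is that a payment can happily absorb a few hundred milliseconds of extra latency in exchange for running in a trusted jurisdiction, while movement and UI cannot.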
- A cloud rendering architecture opens up interesting and unique possibilities to seize! For example, why not create one huge set of clusters that computes only the lighting for the whole metaverse, then updates it only every second or so (again with flexibility: if the changes are too drastic, the lighting update happens a second or two later, but that's still good enough, a smart compromise; if a cluster crashes, others can take over its job with a few seconds' delay). And this giant "photon cache" representation of the whole metaverse can simply be buffered and requested in tiny pieces (matching each client's current location in the metaverse) by all the client-facing clusters spread around the world (placed close to clients for low latency, with very wide bandwidth connections back to the rendering clusters). There are a lot of problems to solve here, but maybe something like that would be an adequate solution to try.
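The shared "photon cache" idea amounts to decoupling the lighting update rate from the frame rate and tolerating bounded staleness per tile. A minimal sketch of that policy, with all names and numbers as assumptions for illustration:

```python
class PhotonCacheTile:
    """One tile of a hypothetical world-wide shared lighting cache."""

    def __init__(self, max_age_s: float = 1.0):
        self.max_age_s = max_age_s  # how stale the lighting may get
        self.lighting = None
        self.updated_at = 0.0

    def is_fresh(self, now: float) -> bool:
        return self.lighting is not None and now - self.updated_at <= self.max_age_s

    def get(self, now: float, recompute):
        # Clients render with cached lighting; only recompute (expensive,
        # done by the central rendering clusters) when the tile is too stale.
        if not self.is_fresh(now):
            self.lighting = recompute()
            self.updated_at = now
        return self.lighting

tile = PhotonCacheTile(max_age_s=1.0)
tile.get(now=0.0, recompute=lambda: "bake#1")        # stale -> recompute
lit = tile.get(now=0.5, recompute=lambda: "bake#2")  # still fresh -> bake#1
print(lit)  # bake#1
```

Frames keep rendering at full rate from the cached tile; only the expensive global-illumination pass runs at the slower cadence, which is the compromise the comment proposes.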
- Custom content created by clients is uploaded to the cloud, split between fast and slow servers depending on content type, and used like any other part of the virtual world on demand. Loading data between clusters in the cloud is super fast thanks to highway-type connections between servers (terabytes per second and more), so there would be almost no waiting for a location to load: all the data is transferred between servers, cached, and available in a fraction of a second.
So, in a nutshell, "metaverse 2.0" may be a purely cloud-based thing on current-gen tech.
Posted by: Lex4art | Thursday, August 05, 2021 at 03:13 AM
>>and everything else put on low-latency servers far from client location.
On high-latency servers of course *sighs*
Posted by: Lex4art | Thursday, August 05, 2021 at 03:42 AM