This is amazing. I really like that Angela Lansbury didn't make her character some bystander just because it was a tech-centered episode. She's like "I'm doing VR and I'm going to help out these programmers".
Like, the episode sounds ridiculous and strangely tech-phobic, but I just really like that she dodged the trope of "old lady skeptical of this new-fangled stuff" and instead she was really enthusiastic about it and not afraid to jump in.
Back when I was playing Ultima Online decades ago, when the kids in the chat would tell outrageously boastful Chuck Norris jokes, I would counter by telling even more outrageously boastful Angela Lansbury jokes.
Chuck Norris can't hold a candle to Angela Lansbury.
I was just out of high school then. I seriously thought we'd have great VR in 20 years.
Maybe 15 years later I checked out Second Life. Kind of basic, but it still seemed like the plan was on track.
Over a decade later, VR games seemed to imply the near-future result we had all been hoping for.
But now it seems that the idea can't robustly scale till the internet is something stronger. Like, we can't yet have a Fortnite with 3000 players.
Could we create a system that intelligently applies one stylistic theme across all resolutions? The aim wouldn't be one fixed visual experience, but a guaranteed scope: you could be in any size of space with any number of users, and the complexity of the textures would depend on local storage and bandwidth. So it looks low-res on a mobile connection, but super 4K with all the files local and a very fast connection. Every resolution has a managed outcome. Like the Minecraft look.
Rendering things (~textures) already happens entirely locally. It has never been any other way. Also, intelligently rendering things at different resolutions in different scenarios is ancient technology at this point [0].
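As a toy illustration of that kind of distance-based detail selection (thresholds and resolutions here are made up, not from any real engine):

    # Halve the texture resolution roughly every doubling of distance,
    # entirely on the client. All numbers are illustrative.
    def pick_texture_resolution(distance_m, base_res=2048, min_res=64):
        res, d = base_res, distance_m
        while d > 4 and res > min_res:
            res //= 2
            d /= 2
        return res

    print(pick_texture_resolution(3))    # 2048 -- up close, full detail
    print(pick_texture_resolution(100))  # 64   -- far away, tiny texture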
Roughly speaking, the only thing that's happening over the internet in a multiplayer game is communicating information about players such as location and discrete actions performed. Small numbers about the state of the game, not graphics or textures or anything like that. When you move, the 2999 other players need to know where you moved to, or where you are now. This has to happen many times per second to create a believable simulation. Even just sharing things like "I moved 2 inches in this direction in the last 30ms" or "I pulled the trigger of my gun when pointing at this angle" between 3000 participants is extremely challenging. It is heavily affected by the particulars of the game design, too. A turn-based game has to communicate a lot less often. Games that can guarantee no interactions between certain segments of players can reduce the information as well. Many game designs require that all players have up-to-date information at all times, or that things don't happen out of order.
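To make that concrete, here's a minimal sketch of what one of those per-tick updates could look like on the wire (the field layout is invented for illustration, not any particular game's protocol):

    import struct

    # One hypothetical per-tick update: who, where they are, which way
    # they're facing, and a bitfield of discrete actions. No geometry,
    # no textures -- just small numbers.
    UPDATE = struct.Struct("<IfffHB")  # id, x, y, z, heading, actions = 19 bytes

    def encode_update(player_id, x, y, z, heading_centideg, actions):
        return UPDATE.pack(player_id, x, y, z, heading_centideg, actions)

    # The hard part is fan-out: 19 bytes * 3000 players * 30 Hz is about
    # 1.7 MB/s of state per client, before any interest management or
    # delta compression.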
I read Carlo Rovelli's book The Order of Time about time, relativity, etc. One of the striking things - that I couldn't fully follow, frankly - was something along the lines of there's a limited cone of spacetime for which you can even tell what relative order things happen in. That there's not a valid concept of "simultaneity" for places far, far, far apart due to speed of light, time dilation, whatnot.
So: just design a game where your players are so far apart that the speed of light is the mechanism that guarantees no interactions between some players. ;)
But more relevantly, this seems to be one of those problems that's been hampered by the slowing of single-core performance growth. Disk, memory, and network keep getting faster, though, so I wonder what the absolute limit would be for a game that really focused on processing a single global state above all else. Maybe even at high bandwidth, the latency to thousands of players wouldn't be good enough to get all the updates out?
> just design a game where your players are so far apart that the speed of light is the mechanism that guarantees no interactions between some players. ;)
For a single person in VR, you can track 6 positions, 3 with orientations too, using about 36 bytes. Add some more state and let's say 50 bytes per update. If you update the hundred most visible people at 60fps, and the rest at 10fps, then you could have 3000 people walking around the same area with about 14Mbps of bandwidth.
That's quite doable!
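A quick back-of-the-envelope check of those numbers (a sketch with the tiers and update size as stated above):

    BYTES_PER_UPDATE = 50    # ~36 bytes of tracking plus extra state
    NEAR, NEAR_HZ = 100, 60  # the hundred most visible people at 60 fps
    FAR, FAR_HZ = 2900, 10   # the remaining 2900 at 10 fps

    bits_per_sec = (NEAR * NEAR_HZ + FAR * FAR_HZ) * BYTES_PER_UPDATE * 8
    print(bits_per_sec / 1e6)  # -> 14.0 Mbps per client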
Gunplay adds more data for bullets but also you probably don't need such detailed positioning, and you can get away with sending the positions of only the nearby players and bullets.
Your math is wrong, and 6 positions and 3 orientations are not enough to communicate an entire player's skeleton. It's important to send skeletons so that each player only has to solve for bone positions and orientations for just themselves, instead of for thousands of players. You will also want to send weights for blendshapes for face tracking. Another thing you left out was the audio data for voice chat.
4 bytes can give you 2mm resolution within a 2.5x3.5x3.5 meter box. 1250x1750x1750 < 2^32
4 bytes can also give you orientation within 1/6 of a degree. ~42000 square degrees on the surface of a sphere x 360 for rotation x 6^3 < 2^32
This could be slightly more optimal but it's a good round baseline. Improving it is a matter of a handful of bits.
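For anyone who wants to see it spelled out, here's a sketch of that position packing, using the same box and 2mm cell size (assumes coordinates are already inside the box):

    import struct

    CELL = 0.002                    # 2 mm quantization step
    NX, NY, NZ = 1250, 1750, 1750   # a 2.5 x 3.5 x 3.5 meter box
    assert NX * NY * NZ < 2**32     # the whole box fits in one 32-bit int

    def pack_position(x, y, z):
        ix, iy, iz = int(x / CELL), int(y / CELL), int(z / CELL)
        return struct.pack("<I", (ix * NY + iy) * NZ + iz)

    def unpack_position(data):
        idx, = struct.unpack("<I", data)
        idx, iz = divmod(idx, NZ)
        ix, iy = divmod(idx, NY)
        return (ix * CELL, iy * CELL, iz * CELL)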
> It's important to send skeletons so that each player only has to solve for bone positions and orientations for just themselves, instead of for thousands of players.
How long does it take to calculate? Modern CPUs have a lot of cores, and GPUs can do so much math.
> Another thing you left out was the audio data for voice chat.
I did, because you don't need to hear 3000 people. And the server could merge large groups of further-away people into single tracks if you really wanted it. A hundred Opus tracks can fit into 2-3Mbps.
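Rough arithmetic on that figure, assuming a typical ~24 kbps per Opus voice stream:

    OPUS_KBPS = 24   # a common voice bitrate for Opus
    STREAMS = 100    # nearby speakers; distant groups merged by the server
    print(STREAMS * OPUS_KBPS / 1000)  # -> 2.4 Mbps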
It's incumbent upon you to show your work first, before demanding somebody else do theirs. And it really helps if your math and understanding of bandwidth and latency aren't wrong, and even more if you can show you have some actual first-hand experience in the field, like developing and shipping a working product.
I don't think your numbers add up either, and you're wildly underestimating the difficulty and complexity of the problem. Which real-time multiplayer VR simulations have you shipped?
I'm happy to show you my work, an application called "Pantomime" that I developed and shipped 7 years ago, which runs smoothly over local WiFi, but terribly over long haul internet. My work that I'm showing you taught me that merely 2mm translational resolution and 1/6 degree rotational resolution would totally ruin the physics simulation and sense of immersion and realism:
Representing user input and object rotations and positions as precise floating point numbers is absolutely necessary when there is a physics simulation involved (see "good VR" prerequisite #3 below, "graphically and physically realistic worlds").
The "Butterfly Effect" explains why slight differences in initial conditions (and especially the high frame and precision with which the iPad can measure real time human input gestures, which can be meaningfully gentle and nuanced, like carefully balancing a spinning coin on an iPad, or gently pushing over a stone monolith -- see 2:00 in the video I linked to above) will cascade into enormous differences in later state.
The person giving the Pantomime demo is my colleague David Levitt, who used to work with Chuck Blanchard, Thomas Zimmerman, and Jaron Lanier at VPL, and whose definition of the three goals of "good" VR I've written about on HN before, over 8 years ago:
When Pantomime co-founder Levitt was a research scientist and product manager with VPL Research, the inventors of virtual reality, they had three prerequisites for a VR system:
1) a way to reach in, in 3D
2) shared reality — support for multiple users and viewpoints
3) graphically and physically realistic worlds
VPL offered a DataGlove to provide 3D input, while its flagship Reality Built for Two VR product offered networked multi-person worlds. Expensive graphics computers and custom hardware brought the full 1992 price to $500,000, which only a few huge corporations could afford.
When Dr. Levitt joined VPL, thanks to an amazing infrastructure by lead VPL engineer Chuck Blanchard, he added realistic gravity, collisions, and throwing a ball into the VR system for physical realism.
But two decades later, the public and technologists have become so impatient that the new systems calling themselves VR have punted even on the core original criteria.
Head-mounted systems like the Oculus Rift offer no way to reach in. In demos, visitors twiddle a 1980s style game joystick. And users don’t natively network — in an Oculus demonstration you don’t see the other users in the VR world — not even the other players sitting alongside you in the demo.
David Levitt: "I work in Virtual Reality, and everyone’s wondering what you can say about your acquisition of Oculus VR. In particular, I’ve had demos of it: I could look around but I couldn’t reach in. Do you have solutions for that that you can talk about?"
Jay Parikh: “You can’t interact with anything. These are big, hard problems … what you do with your hands, because you can’t do anything with your hands — or it’s hard to be using a controller when you can’t see your hands and you have the goggles on — these are problems we have to solve in a good and seamless way.”
> [My software] runs smoothly over local WiFi, but terribly over long haul internet.
I'm pretty sure that this is the core of your confusion. We're talking about software that handles ~3000 simultaneous participants. You're not going to find a userbase that large all running on a LAN (wired or not) outside of a convention hall or research facility. So, we're necessarily talking about networked software that users run on their (often abysmally designed) home networks and connect to other players over their (frequently terrible) Internet connections.
I expect that the tolerances you've specced out are absolutely correct for simulation of the local player. They may have even been reasonable for remote players on a LAN. However, those tolerances are overkill when you're designing a system that's designed for use by the general public on the greater Internet. This means that it _must_ do latency compensation for 100->500ms of potentially-highly-variable RTTs, deal with 1%+ packet loss, and also -if it has any even-vaguely-serious adversarial gaming components- have a server component that handles anti-cheat by way of player input validation.
There's certainly value in the high-fidelity VR software that your company wrote. However, the fact that it worked poorly over the Greater Internet suggests that you folks either didn't have experience with writing mass-market networked video games, or that your focus was exclusively on a high-fidelity simulator that would never have to paper over the messes that you get on the regular from the home Internet connections of your typical gamers. (As one example, because of the general shittiness and unpredictability of the Internet, it's fairly common practice in networked video games for all clients to run their physics simulation locally. Clients get notifications of events that will affect objects in the simulation, and then they slam those events into their local sim and play the results locally. For events that might matter to all players in the game, -say some physics object gets bounced around that might block pathing- the server will do a comparatively low-fi simulation of that and validate clients' attempts to move into or through the blocked area.)
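A bare-bones sketch of that pattern, with all names invented: each client integrates its physics locally every frame and folds in server-confirmed events whenever they happen to arrive.

    # Each client runs its own physics simulation; the server only ships
    # events and validates coarse outcomes. Structure is illustrative.
    class LocalSim:
        def __init__(self):
            self.objects = {}  # object_id -> (position, velocity)

        def apply_remote_event(self, event):
            # Slam a server-confirmed impulse into the local sim.
            if event["type"] == "impulse":
                pos, vel = self.objects[event["object_id"]]
                vel = [v + dv for v, dv in zip(vel, event["delta_v"])]
                self.objects[event["object_id"]] = (pos, vel)

        def step(self, dt):
            # Integrate locally every frame, regardless of network state.
            for oid, (pos, vel) in self.objects.items():
                pos = [p + v * dt for p, v in zip(pos, vel)]
                self.objects[oid] = (pos, vel)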
> It's incumbent upon you to show your work first, before demanding somebody else do theirs.
My math is right next to that demand!
> My work that I'm showing you taught me that merely 2mm translational resolution and 1/6 degree rotational resolution would totally ruin the physics simulation and sense of immersion and realism
That's a much, much smaller workspace, so at that scale it would be a fraction of a millimeter. I don't see why the degrees aren't enough, though; that tablet looks to me like it already has a small amount of jitter, enough that the quantization wouldn't matter.
But if you want to 10x the precision of everything I said, that's only 11 more bytes.
> Representing user input and object rotations and positions as precise floating point numbers is absolutely necessary when there is a physics simulation involved (see "good VR" prerequisite #3 below, "graphically and physically realistic worlds").
I can understand that, but even before worrying about diminishing returns, you can only throw so many bits at a bad input:
> It was determined that the translational accuracy of the [Oculus Rift S] was 1.66 ± 0.74 mm for the head-mounted display and 4.36 ± 2.91 mm for the controller, and the rotational accuracy of the system was 0.34 ± 0.38° for the HMD and 1.13 ± 1.23° for the controller.
> Kreylos estimated the precision of Lighthouse tracking to be around RMS 1.5mm and the accuracy around RMS 1.9mm.
So you could fit the full precision of those devices into 36-40 bytes, and that's before we even consider that most people are only tracking 3 points, not 6. Even with some extra precision, I already tossed on some more bytes and it doesn't really matter in the end whether it's 14Mbps or 20Mbps.
> But two decades later, the public and technologists have become so impatient that the new systems calling themselves VR have punted even on the core original criteria.
> Head-mounted systems like the Oculus Rift offer no way to reach in. In demos, visitors twiddle a 1980s style game joystick. And users don’t natively network — in an Oculus demonstration you don’t see the other users in the VR world — not even the other players sitting alongside you in the demo.
That's a real shame, and worth keeping in mind, but I was talking about taking the kind of VR immersion that is currently widely deployed and adding more people.
>Bandwidth is the amount of data your connection can handle, whereas latency refers to delays that impact how quickly data gets to your devices. Even though they’re very different, they are related in one important way: If you have less bandwidth, you may have more latency. This is because if your internet connection can only transmit a certain amount of data per second, such as 5 megabits, files that are larger than that will take longer to get to your browser.
>Here’s another way of looking at it: While bandwidth affects latency, latency doesn’t affect bandwidth. Returning to the freeway example, the width of the road itself is like your bandwidth, and the cars are like the data that has to travel from one point to another. If the police set up a checkpoint on the freeway, it doesn’t matter how many lanes it has. It’s going to take longer for the cars to reach their destination.
>It’s the same with bandwidth and latency in your internet connection. No matter how many megabits your ISP allows you to download per second, if the data has to pause for a security inspection or take detours through multiple locations, you’re going to experience latency.
It's a continuous stream of data. If you want to conceptualize it as "files", then you can consider each "file" as either sub-kilobyte, or sub-frame-length. With the former, there is exactly zero change on latency, and with the latter there is a handful of milliseconds.
In other words, bandwidth only affects the portion of latency between receiving the first byte and receiving the last byte. In this situation that's so tiny it's negligible. The latency to the first byte is what matters, and that number is basically independent of the bandwidth.
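Concretely (numbers illustrative): the bandwidth-dependent part of latency for a game-sized packet is tiny next to the propagation delay.

    def time_on_wire_ms(packet_bytes, link_mbps):
        # Time between the first and last byte arriving -- the only
        # piece of latency that bandwidth controls.
        return packet_bytes * 8 / (link_mbps * 1e6) * 1000

    print(time_on_wire_ms(500, 5))    # 0.8 ms on a 5 Mbps link
    print(time_on_wire_ms(500, 100))  # 0.04 ms; either way, dwarfed by
                                      # a typical 50 ms RTT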
> freeway paragraph, last paragraph
If the police set up a "checkpoint" then that is directly going to delay packets. Obviously that changes latency. That has nothing to do with bandwidth though. The same with "data has to pause for a security inspection or take detours through multiple locations". I'm not sure why you quoted this part.
But let's consider a freeway that goes down to one lane (or alternatively that we 10x the number of cars we have), which is a bandwidth change. Well, we're sending a fixed amount of cars per second down the freeway. If they can't fit into one lane, then the whole thing will immediately back up and we'll lose connection entirely. It doesn't affect latency, it drops the connection.
>Like, we can't yet have a Fortnite with 3000 players.
Interestingly, EVE Online (the only true MMORPG, IMHO) can have player battles that large. Typically they get TimeDilation™ applied, which means that the game actually slows down so the server can handle it on a given node. However, this would be utterly vomit-inducing in VR.
That said, the main trade hub (Jita) always has about a thousand players on it at any given time. Though the vast majority of those will probably be docked, there can easily be a few hundred in system at once actively flying around.
I think a few hundred player VR game is plausible with today's tech.
Planetside 2 routinely does a couple hundred players in a big fight, and recently did a server test putting a bit over a thousand in a single fight. A continent (approximately a 1km-radius circle subdivided into regions to capture) will regularly have a thousand people on it.
I'm not sure the issue is technical; I think it might be business. There doesn't seem to be a non-niche use case for it.
If you take the actual words "virtual reality" and think about it, we already have that: alternative worlds where you do stuff like fight monsters. The social element of that is already well served by traditional screens, and you don't really need to be in it in 3D for it to be engaging, because other people have engaged us since the beginning of time. Specific worlds like WoW or EVE are also popular because they're specific: one is fantasy, the other is sci-fi. An unspecific world that kind of just looks like reality, what is it even? Who will go there?
Something like the surgery training that's in all the Meta adverts, maybe. I'm not a surgeon so I can't tell for sure, but it would seem to me that an already high-value activity like that has already found all the educational materials it needs to let surgeons learn and practice their craft. A VR version would be nifty, but how much value does it add over whatever surgeons are currently doing? So it might come, but it will come because it gets cheaper due to general investment, not because the doctors really need it.
Check out Planetside 2; it's a fully mature game that effectively pulls this off. Thousands of people on a single massive map, with individual bullets simulated as physics objects. Not sure about mobile connections, though; ping is fundamentally important.
For some more horribly bad VR fun, see the X-Files episode "First Person Shooter". Yikes. I'm ashamed of my year 2000 self because I'm sure I thought it was so awesome!
Murder, She Wrote was well ahead of its time - I actually remember watching this episode back in the day, and it prompted all sorts of questions back then. Like some people commenting here, I thought we'd have fully immersive haptic VR in 10 years or so. One thing I didn't think we'd have is nearly-HD virtual screens in VR; in my head things would still have been blocky but interesting. I'm not sure if I've become more pessimistic, but I can't see the Metaverse thing coming out even 10 years from now!
If we can have this and the animated gif where the woman shoots the guy wearing VR goggles, that would be a complete kit for making fun of Meta and Zuck.
Me too, I spent a month in hospital last year and the fact that 4 different channels showed Murder She Wrote at different times during the day definitely helped get me through it.
This was one of the episodes I remember more clearly and I remember it was pretty good although it's fair to say I was on a lot of drugs so my review is probably not entirely trustworthy.
Did you see the one where she stumbles upon the murder of the very last resident of Cabot Cove, and is forced into the realization that she is in fact the most prolific serial killer in history?
Is there any VR that works decently for people with glasses? I tried VR for the first time last week and it was quite annoying for me, both for how the glasses were handled and some blurry vision from time to time. I'd never try it again by choice.
I use a Quest 2 with a 3D-printed prescription lens adapter. It does a great job - you forget the lenses are even there when they're in. I have astigmatism, so a simple diopter adjustment wouldn't be enough for me anyhow.
The lenses were cheap - the design is around a cheap pair of glasses from one of the online stores. While it’s been a long time since I put mine together, I want to say I have <$20 in the prescription lens adapters.
I used to wear glasses inside my Samsung Odyssey HMD until they started scratching one of the lenses (luckily I noticed before it became visible while using the headset, it was fine for a few months though) then ordered a set of prescription clip-on lenses. The clip-on lenses are great.
I have a Quest 2 and it fits pretty well. You can adjust the width of the lenses to match your eyes. The Quest 2 also comes with a spacer specifically for people with glasses, which moves the headset away from your face. If you were using someone else's headset and they don't wear glasses, this spacer is almost certainly tucked in a drawer or was thrown away with the box. These factors are easily overlooked and possibly contributed to your poor experience. Also, if you're going to buy a VR headset, consider the cable, head straps, and face pads to be consumables: they break all the time, and there's no good way to clean the face pads.
I got dedicated lenses for my Quest 2 so I don’t need to use the spacer. Much more comfortable than wearing glasses also, and only like 60 bucks to buy online.
There are a couple of headsets that allow you to adjust the diopter of the lenses so you don't need to wear glasses, e.g. Vive Flow or HuaweiVR. Both of them however have numerous other big issues, so I really would not recommend them.
PanasonicVR/MeganeX is an upcoming one with the same ability that might be a decent headset otherwise; however, it's going to be expensive (~$1300 for the full kit).
The cheaper alternative are prescription lens inserts, they get put on top of the existing lenses and replace the glasses.
The VFX-1 was based on camcorder-viewfinder micro-displays, which in turn were limited by PAL/NTSC resolution. This was not a problem when the VFX-1 was released, as most games were running at 320x200 anyway.
As time went on, monitors and games got a lot better and VR headsets didn't, due to not having any better displays available on the market. The consumer VR market itself was never anywhere near big enough to incentivise the creation of custom VR displays.
There were still headsets in the following years, such as the Eye-Trek FMD, but they moved away from VR and towards just being a wearable screen. They didn't have any tracking or controllers.
Even the modern VR revival only happened because smartphone displays got good enough and provided an alternative to the old micro-displays. That in turn also allowed a much bigger field of view, which helps a lot with immersion. A big field of view was something that was impossible with the old micro-displays: you need a physically big screen to get a big field of view, and those micro-displays were tiny, limited to 30-60°, whereas modern VR has around 100°.
On top of that comes the surrounding technology: when the VFX-1 came out, most games weren't even rendered in real 3D, just pseudo-3D. That means you couldn't even make full use of what the VR hardware was capable of. Without position-tracked controllers, VR was also fundamentally limited to being little more than a 3D screen with a bit of headtracking. It didn't look bad, but it just wasn't compelling enough to replace a monitor.
Also worth pointing out that none of this has fundamentally changed. Modern VR has been around for 10 years at this point and is still struggling to find mass-market acceptance. VR support in normal games is just as bad as, if not worse than, it was back in the VFX-1 days. Meanwhile most native VR games are small-scale indie stuff. The tracking and controls got a lot better, but the screens are still far from competing with or replacing an actual monitor.
At this point I am reasonably convinced that VR won't take off until it can fully replace a monitor, meaning resolution has to be at least double what it is now. The motion gaming that VR can do certainly has an audience, but that doesn't look to be enough for wide adoption. There is also nobody left to really do high-budget VR content; Sony might still have a shot, but Meta seems to have given up and everybody else has left the VR space for good.
> He goes on to describe the murder as the sort of thing that would happen "up on the Embarcadero in Frisco by the wise guys," a phrase no one from San Francisco has ever said.
If I remember right, there's an Anthony Jeselnik standup special on Netflix where he asks an audience member where they're from; the guy replies "Frisco", there's a loud groan from the rest of the audience, and Jeselnik starts lampooning him for it.