The XR gap is in content, not developers, and not tech. Any VR play should be throwing money at creatives, not devs or APIs or frameworks.
The lack of any outside visibility is a huge problem for families. It is anti-social. Small kids can't be near you while you are using VR. Pets can't really be near you.
It's very difficult to share your experience with other people. VR streamers aren't very popular on Twitch. I would think they have trouble interacting with their audience, since they can't easily see Twitch chat.
I also question the accessibility of VR. At what severity of physical impairment does VR become unusable?
There are just so many problems with VR as a mass-consumer product that don't have anything to do with technology.
AR doesn't have many of these problems, because you can still see other people.
Google's direction makes sense.
I've been visiting friends who have VR, and organised VR mini-parties at my place. The setup goes like this: you mirror what's in VR onto a TV screen, one person plays, and the others watch, taking turns. It works extremely well with Beat Saber, FNAF, and some shooters.
I also did social painting in VR, where people took turns drawing in Tilt Brush, and others went in to look at it and add their own stuff.
Another case I saw is adding VR setup to a bigger party, which works very well with Beat Saber.
VR can be just as social as couch-multiplayer on consoles, or Karaoke. If not more.
As for pets, it depends on the pet, but with the 2-3 dogs I saw there were no issues whatsoever.
The first is like the one GP made: absolutely sure of themselves, stating with certainty that there are intractable problems with something. On closer inspection you find they have no personal experience, but they're really smart and can imagine the worst consequences of everything well.
The second is the one I'm replying to. They have first-hand experience and give a more balanced take on the situation.
I’m happy that we get to read both kinds of comments.
> The lack of any outside visibility is a huge problem for families. It is anti-social.
Come on. This way you could say most of gaming is antisocial, because a lot of interesting games (most of them, to me) are single-player experience. There are many people, people with families, for whom playing games (or watching movies for that matter) is about games (and movies), and not about an excuse to be with other people.
Also, you can always stream the game to another device for other family member(s) to watch.
> Small kids can't be near you while you are using VR.
I guess it depends on how old they are, but my wife and I divide forces: one of us watches the kid, the other plays, then we switch.
> Pets can't really be near you.
My pet doesn't mind. It understands that the person standing in the middle of the room (which is weird behavior) with a box on their head is ignoring it and may make sudden moves.
> It's very difficult to share your experience with other people.
Experience? Not really. You just stream the game. Interacting with other people while playing, Twitch-like? That's more difficult, so the showbusiness part of the gaming market has a problem, but this doesn't affect people who want to actually play a game.
> I also question the accessibility of VR. At what severity of physical impairment does VR become unusable?
VR requires sight and some control over your hands; depending on the game, this may expand to full motor control. The requirements for most games are fundamentally the same as for "regular" games you play with mouse+keyboard / gamepad.
Working AR will obviously have more total utility than VR, so it's good to pursue it. But VR exists here and now, while AR is still science fiction (HoloLens is beginning to scratch the surface, and no, overlaying stuff over an image of the world captured by a camera doesn't count as proper AR). However, VR can be used to simulate, experiment with, and prototype future AR applications, so it's weird to just abandon it.
From a technology standpoint, the Holy Grail device will be able to do both VR and AR, selectably and seamlessly. The reason I think so many extant AR apps are just crappy VR apps is that the devices we have can't blend VR and AR; the developers wanted (or were mandated) to use a specific system, and then shoehorned a design onto the system they had.
> no, overlaying stuff over image of the world captured by a camera doesn't count as proper AR
News to me. If that's the case, then how is the HoloLens AR? Without the spatial surface recognition system, the HoloLens is conceptually nothing more than "overlaying stuff on an image of the world". That the world is viewed through plastic rather than routed through silicon doesn't make a difference.
Add surface capture to a VR headset with a pass-through camera (which can be done today with off-the-shelf parts, it's just not in a commercial product yet) and I see no difference.
It doesn't capture the world through the camera and re-render it on the screen.
That's why location-based experiences (LBE) are the solution. It's a different monetization route: basically a theme park model (invest in cutting-edge tech and high-value content) and maximize throughput like a VR "ride."
Having gone through a number of the experiences I'd highly recommend them. Better than anything you can get at home, it's a fun outing with your friends, and the content is pretty awesome.
No we're not.
I never had the slightest interest in 3D TV. VR was compelling enough that it made me change my main focus towards learning interactive 3D.
There were a lot of super gimmicky apps (remember the virtual beer app?), games whose sole reason for existence was that they could use the accelerometer (as a crappy steering wheel, etc.). Remember the shake gesture?
Granted this is far from a perfect comparison, but my point is that the content and usage patterns of smartphones have evolved, often in a different direction than people imagined 10 years ago.
Another example, early YouTube content was also very different from today's YouTube content, etc.
If VR is still around 10 years from now, I expect the use cases, UI patterns, etc. will be quite different from the vision that most people have today.
I know several parents and dog owners who have bought Oculus Quest after trying it out and they don't seem to have a problem with this. I guess they just close the door or play after bedtime.
> I would think they have trouble interacting with their audience, since they can't easily see Twitch chat.
All VR Twitch streamers use some kind of plugin that shows the chat either as a floating panel (in Beat Saber) or as a palette you can bring up Tilt Brush style (in VRChat).
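I don't know how those plugins are implemented internally, but Twitch chat is plain IRC under the hood, so the overlay part is mostly "run an IRC client and render the text into an in-game panel". A rough sketch of the parsing half (the channel and user names below are made up for illustration):

```python
import re

# A chat line from Twitch's IRC servers looks like:
#   :nick!nick@nick.tmi.twitch.tv PRIVMSG #channel :message text
PRIVMSG_RE = re.compile(r"^:(?P<nick>[^!]+)![^ ]+ PRIVMSG (?P<chan>#\S+) :(?P<msg>.*)$")

def parse_chat_line(line: str):
    """Return (nick, channel, message) for a PRIVMSG line, else None."""
    m = PRIVMSG_RE.match(line.strip())
    if not m:
        return None
    return m.group("nick"), m.group("chan"), m.group("msg")

msg = parse_chat_line(":viewer42!viewer42@viewer42.tmi.twitch.tv PRIVMSG #somestreamer :nice cut!")
# msg == ('viewer42', '#somestreamer', 'nice cut!')
```

The rendering side (drawing each message onto a texture shown on a floating panel) is engine-specific, but the data feed really is this simple.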
Let me start by saying that I was a big VR skeptic until the Oculus Quest. I too thought that outside visibility would be a basic advantage AR would just have over VR forever, but surprisingly, the passthrough camera view is more than good enough for several of the interactions you'd want with the world outside the device. You can see others, see their reactions, and interact, all while having the device on your head. Soon the passthrough will be triggerable at a moment's notice, too.
I agree. It reminds me of the 3D TV hype. Yes, they exist, and yes, some people use them, but it's still marginal.
On an unrelated note, I still can't believe MS killed the Kinect. It's fun, it's social, it's safe if you're careful, and you can be very free with your movements if you have enough space. You can play games and get healthier at the same time. I bought most Kinect titles ever released, and it kills me that there will be no more.
Unity's default stuff is so bad for VR and I've replaced so much of it in the last three years that I'm pretty close to having my own game engine. In another year, I could be completely off of Unity and running on just Xamarin.
As for motion sickness; with modern headsets that's generally only a problem for games which move the "camera" in ways which don't match your movement in the real world. (Such as racing games.) There's plenty of content available these days which doesn't have that problem.
Space restrictions I agree are an issue. There are a lot of games which only really need standing room to work well, but even then it can be hard to find enough space close to your PC (since the area you're standing in needs to be completely free of obstacles you could hit with your hands). I think standalone VR partially solves this problem though by increasing the number of areas you can set up in. With Quest, for example, the area you're playing in doesn't have to be near your PC; that makes it way easier to find a space that's big enough.
Who does this work for? It doesn't for people with spouses and children. I can't be removed from the real world when my child wants my attention. How many hours of game play do you think the average family person might have available to them where they can just remove themselves from the real world?
Given all of VR's problems, the people who will play VR games seriously enough to spend money on AAA VR games will always be a smaller subset of the overall gaming population. It's never going to be bigger than what exists already. All these billions dumped into VR as the next big thing seem to miss that the VR market is limited by the problems the parent comment mentioned.
You could also ascribe the same issues to a parent who wanted to leave the house. In fact more so, since if I left the house with a small child, I couldn't instantly return if I became aware of some urgent situation. Yet bars, pool halls, bowling alleys, movie theatres, restaurants, etc. still seem to have survived this seemingly monumental issue you're harping on about.
Couldn't you make the same argument against headphones in general? Those seem popular enough. I've even known people who used sleep masks and ear plugs when going to bed. Realistically, VR doesn't actually take you anywhere. You're still standing right there, and if a child or spouse needs your attention they can still raise their voice or even just grab you to get it. I don't think the immersiveness itself is much of a problem for most people, although there are certainly times and situations where you wouldn't want that because you need to be more aware of your surroundings.
It works perfectly well, to the extent that any single-player activity works with spouses and children. Source: I have a spouse, a child, and a VR headset.
I mentioned it in another comment, but this is what LBE (location-based experiences) is all about. You're not just looking at a "game" you play, but a confined piece of content that you experience: like a game, like a film, like a TV episode, yet none of the above all the same. It's a completely different medium for telling a story.
There's a lot of really cool, niche things out there right now - creators are exploring new ways to create art that maximizes XR's ability to remove a person from their environment (wholly or in part) and uses that combined with audio and video technology to create an interactive, immersive experience as a piece of art and/or storytelling.
But the issue to me is this: if you look at the history of technology and art or multimedia, tech has almost always chased the artists' vision. And that's the problem I see with all of Google's plays into XR. They're not saying to artists: "Imagine you had a medium that allowed you to remove a person from their environment, construct a new one around them, immerse them in it, and let them interact with your vision. What would you make? What do you need?"
If you want examples of this: Walt Disney's animation studio pioneered many animation techniques and technologies. That didn't come from engineers experimenting; it came from animators wanting to create something they couldn't, and engineers figuring out what to build to realize their vision. Game engines don't add features for the sake of features; features are demanded by game designers who want to present a new experience to the player. The same goes for paint brushes, instruments, etc.
And that's why I feel a lot of VR plays struggle. They're engineers putting the cart before the horse. Throw money at creators, and figure out what they need to create.
It's also why some of the best VR I've experienced has come from LBE. It's content first, figure out the platform second. Monetization third (which is interesting, since it's a theme park ride model).
If there are two things that are absolutely certain in media: if the content is compelling, the technical side is secondary. And if the content is compelling, it can be monetized. All these VR plays are screwing up by focusing on tech and money over content, which is what actually drives people to spend their money and use the tech.
Oculus Quest has managed to solve both of these, even though they do their best not to exploit it. Because it's an untethered headset, you have to designate a "play area" for safety, and to do that, you get to see the world through the headset's cameras and paint it with your controllers! Even though the view is monochrome and grainy, you can see enough to walk around and do tasks. I've frequently interrupted my play to do something without taking the headset off; as soon as you walk out of the designated play area, you get the camera feed back.
Currently, Oculus is using this view only as a safety feature, and AFAIK you can't access it as an app developer. Moreover, using the headset outside is discouraged (perhaps because direct sun focused on the optics could burn holes in the screen). But Quest is essentially a working proof of concept, showing that the possibilities are there.
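Oculus's actual guardian implementation isn't public, but the core check is conceptually simple: test the headset's floor position against the polygon the user painted, and flip passthrough on when it leaves. A toy sketch, in 2D floor coordinates (all names are my own, not any SDK's):

```python
def inside_play_area(point, boundary):
    """Ray-casting point-in-polygon test: is (x, z) inside the painted boundary?"""
    x, z = point
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, z1 = boundary[i]
        x2, z2 = boundary[(i + 1) % n]
        # Does a ray going in the +x direction from the point cross this edge?
        if (z1 > z) != (z2 > z):
            x_cross = x1 + (z - z1) * (x2 - x1) / (z2 - z1)
            if x < x_cross:
                inside = not inside
    return inside

# A 2 m x 2 m square play area painted on the floor
area = [(0, 0), (2, 0), (2, 2), (0, 2)]
passthrough_on = not inside_play_area((3.0, 1.0), area)  # stepped outside, so show cameras
```

The real system also fades in a warning grid as you approach the boundary, which is just a distance-to-nearest-edge check on top of the same polygon.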
 - Apparently they're used for controller tracking.
It should be noted that the Quest runs the Snapdragon 835 SoC, the same SoC in the Go and every other standalone headset you can actually acquire right now (specifically, Lenovo Mirage Solo and Vive Focus). It's the same SoC as the Google Pixel 2. In other words, it's old. It's why Oculus had to drop the framerate from 75Hz on the Go to 72Hz on the Quest: the added overhead of the camera tracking and higher resolution display left too little frame budget for applications.
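The framerate drop sounds small, but per frame it buys real time: at a fixed refresh rate, the frame budget is just the reciprocal, so going from 75Hz to 72Hz reclaims roughly half a millisecond per frame for tracking overhead.

```python
def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to render one frame at a given refresh rate."""
    return 1000.0 / refresh_hz

budget_go = frame_budget_ms(75)     # about 13.33 ms on the Go
budget_quest = frame_budget_ms(72)  # about 13.89 ms on the Quest
extra = budget_quest - budget_go    # about 0.56 ms reclaimed per frame
```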
I will immediately buy the first headset with an 855 in it. Not only will the raw performance be substantially better, but it has custom silicon in it specifically for doing image-based tracking, as well as processing higher resolution video feeds. Quest is just a preview of what is to come. I'm pretty sure we're going to get the first, decent pass-through camera modes in the next gen of mobile VR headsets.
There's also the thrill of rhythm and sports games, though those aren't much different than their non-VR counterparts.
The external cameras turn on in the quest when I need them because I’ve gone out of bounds
The guardian helps me stay inbounds in my small space
The low latency and amazing head tracking have removed all motion sickness
But it’s still just a beat saber machine. There haven’t been any major titles released in months, and it’s unclear if there will be.
We need Nintendo-style first party titles and some killer apps.
AR may be a different beast. I don't usually hear people describe it as uncomfortable, mostly just lacking power and content. I also think the market for AR would be much bigger than VR. I think that is why Apple, MSFT, and Google are focusing on it.
Wheeeeere did you read that?! I've been in the VR industry for the last 6 years and everyone I know vehemently recommends against doing that.
Actually, I've found the experience somewhat magical, in that playing a game for an hour or two gives me the same stress relief as leaving the house to go do something outside after being cooped up all day. It feels like I'm actually going to another place nowadays.
This really only started when I got the high-end Valve Index, which has a 90Hz+ refresh rate, so maybe that has something to do with it.
That said, being on a boat for hours is hardly something we evolved for either, yet no one writes scaremongering articles and posts about that. So you might be doing a disservice with all this worry you're stirring up.
And again, motion sickness is by far the least of your worries. My point was "don't just assume nausea is motion sickness". There are other defects that can cause nausea that are not adaptable.
I think you're wrong about that. Any proof?
Very, very few actual developers in those boards.
"Getting your VR legs" is definitely a real thing.
More concretely, there are a number of different artifacts of current VR systems that will cause different side effects, depending on how long you expose yourself to them. Unfortunately, the common vernacular lumps all of those issues under the title of "simulator sickness". While simulator sickness is a well-known problem, it is by far not the only one, and if you're experiencing nausea or discomfort during a VR experience, you should not just assume it's "simulator sickness" and try to "power through".
IPD mismatch is your eyes telling you that things are at a different distance than where they feel to be when you grab or travel to them. In AR, this can make 3D objects look like they're swimming while also remaining stationary, which is disconcerting but ultimately not a big deal. In VR, it can make you feel kind of dizzy, a bit like you're swimming, because the effect is very similar to viewing things through water, or through glass with a high refractive index. If you've ever had to switch between contacts and thick glasses, then yes, it is trainable.
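You can see why mismatch shifts perceived distance with a vergence-only toy model (ignoring accommodation and every other depth cue, so this is a simplification, not how any headset actually computes anything): if content is rendered for one IPD but viewed by eyes with another, the vergence angle implies a scaled distance.

```python
import math

def vergence_angle(ipd_m: float, distance_m: float) -> float:
    """Angle between the two eyes' view directions for a point at a given distance."""
    return 2.0 * math.atan((ipd_m / 2.0) / distance_m)

def perceived_distance(rendered_d: float, ipd_render: float, ipd_user: float) -> float:
    """Distance the user's vergence implies when content was rendered for the wrong IPD."""
    theta = vergence_angle(ipd_render, rendered_d)
    return (ipd_user / 2.0) / math.tan(theta / 2.0)

# Content rendered for a 63 mm IPD, viewed by someone with a 70 mm IPD:
d = perceived_distance(1.0, 0.063, 0.070)  # an object at 1 m reads as about 1.11 m
```

The algebra collapses to a pure ratio (perceived = rendered x ipd_user / ipd_render), so the error scales with the whole scene, which matches the "viewing things through water" feeling.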
Some people find bad depth cues to be nausea inducing. There is evidence that the brain weights different depth cues more or less strongly depending on one's overall hormonal balance. People with higher testosterone levels judge depth through motion. People with higher estrogen levels judge depth through relative object size and shadowing. This isn't just "men vs. women" as there are any number of reasons your endocrine system could be off the normal distribution, but it is why a lot of women feel very uneasy immediately upon putting on a headset and why men tend to not like monoscopic 360 content. The effect is so strong for some women that I've never seen anyone even attempt to power through it. The best solution is to stop using that particular app and start using one with better graphics.
But most importantly, low framerate can cause violent, persistent nausea and headaches within seconds. "Simulator sickness", on the other hand, takes extended exposure to develop. Higher framerates are always better, but 75Hz seems to cover about ~85% of people, 90Hz ~95%, and 120Hz ~99%, which at that point is better than the numbers for people watching action movies in theaters. This is one of the reasons I found it so disappointing that the Oculus Quest was set to 72Hz; we should be going up, not down. If the application you are using is not running at a very high framerate, you're in for an extremely bad time, and it only gets worse the more you expose yourself to it. You absolutely must not try to power through low framerate.
So this is why I say "don't try to power through sim sickness". It might not be sim sickness. Take breaks whenever you feel bad. Whatever "VR legs" are, if it's just the proprioception or IPD mismatch, you'll eventually get there even if you don't "power through". But there are plenty of other issues that you should definitely not try to "power through".
Vive is far too heavy and Rift has issues with lens fog (so I can't bloody see anything a minute after putting it on my head)
I'm sorry, can someone explain what a creative is and why money should be thrown at them? If they're not developers, then how are they going to create content? Also, without good frameworks and APIs, the barrier to creating content is much higher.
Nope, the Nokia 3230 (Symbian 7.0) was first with such functions: it had an exclusive AR game, «Agent V», with depth-sensing capabilities.
It's entirely plausible that AR ends up being a dead end. It's 5 years later and still not really a big market.
Headsets, I think the current devices are just a preview. I'm skeptical that transparent displays are strictly necessary for AR headsets, and I have seen some industry speculation that the HoloLens 2 is probably at the limit of what can be done with waveguide displays. Unfortunately, waveguide displays are just about the best sort of transparent display tech available for this particular use case, so we need to either shift to pass-through-camera displays or invent a whole new display technology.
I'm betting more on pass-through. I think the tech is already available; what's at issue is decades of software cruft in modern operating systems making low-latency throughput too difficult to achieve.
I am guessing there are uses for it in industry, although I'm not sure where it would find the best product-market fit.
A futuristic AR device that would work seamlessly would truly "disappear" into the environment. No HUD necessarily (at least in terms of what Google Glass was trying to accomplish) but imagine if people, places and things could register actions in the real world, sort of like "Press X to open door" in a video game.
Imagine a sort of DNS-type registry for registering ownership of physical space tied to geospatial mapping data. When you walk into such a space, your AR device could unobtrusively indicate that there is something interactive where you are, and who owns it/what it is.
Perhaps your city owns public basketball courts. They could register a virtual fence around it and when you're on the court you're presented with the option to start basketball stats tracking and a tutorial with a pro from the city's NBA team.
Or something more commercial could be: you walk into Starbucks and your drink is ordered for you automatically and you can see the estimated wait time.
These aren't some sort of experience designed for entertainment value, they're truly small improvements which augment your life. Your choice whether or not you think such a thing would be necessary or good for you, but a lot of it already exists with phones. AR used this way would simply make phones obsolete.
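The registry idea above could be as simple as geofences mapped to owners and actions, with the device querying its own location against them. A hypothetical sketch (every name, coordinate, and field here is invented for illustration; no such registry exists):

```python
import math

# Hypothetical registry: each entry is a circular geofence (lat, lon, radius in
# meters) mapped to an owner and an action the AR device can surface.
REGISTRY = [
    {"owner": "City Parks Dept", "action": "Start basketball stats tracking",
     "lat": 38.9072, "lon": -77.0369, "radius_m": 50.0},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def actions_at(lat, lon):
    """All registered interactions available at the wearer's location."""
    return [e for e in REGISTRY
            if haversine_m(lat, lon, e["lat"], e["lon"]) <= e["radius_m"]]
```

A real version would need arbitrary polygons, an ownership/verification model (the DNS analogy), and spatial indexing, but the lookup itself is not the hard part.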
Obviously, the processing power wasn't up to the task. It was a bit like trying to go back to an 800x600 CRT for work. But the HoloLens is a 3.5 year old AR headset by now. What I did get out of it was a glimpse into what "could" be, for some value of "AR headset".
First of all, I wasn't tied to a table. These days I don't have a desktop anymore, but even my laptop is basically unusable without a table, especially for the work I do, where I basically require the use of a mouse.
Second, I wasn't limited to my one display. I could put windows anywhere. If you take the virtual desktop concept, and throw it away, scoffing at it as a child's toy, you might get the idea. I kept Twitter hanging out on the front of my refrigerator. I only checked it when I got up to take a drink. I kept my email sitting over to the right of me. It was never directly in my field of view, close enough to be convenient to check, and never hidden behind other windows or minimized away to require hunting for it in an ALT+Tab view.
Third, the system remembered the location of all of my apps between sessions. They became more like appliances in my mind than windows on a screen. The physicality of it, and the recall of it, surprisingly made it more enjoyable to use.
Fourth, the voice assistant was significantly easier to use now that it was running from a microphone sitting right next to my face rather than one sitting next to my GPU fans. I found myself starting to use voice commands for basic tasks like opening and closing applications.
Finally, when I was done, I could just take it off. There is no omnipresent display to wake up and show you a notification icon. There's no need to keep a dedicated space for it. I started viewing my desktop like a garden plow: a big piece of equipment that needed space (which was certainly at a premium for me at that time) and required a lot of manhandling to use well.
Application-wise, I think it's hard to really describe what is "good" about AR. Either you talk about existing apps, in which case people say "yeah, but I can do that on my computer now", or you talk about potential apps, which is too abstract for people. Try going back to the early 80s and explaining to an MS-DOS power user that they'll be using exclusively GUI systems in less than 20 years.
Because of the way it blends into life, I think AR has the potential to improve the overall computing experience for anyone who already uses computers. The potential is to replace your laptop AND your smartphone, not to be a device in addition to them.
That's what John Carmack now says is the argument for VR. It's not about virtual reality. It's about virtual screens. Now, you can have a wide-screen theater experience even if you're in some tiny apartment in Guangdong or San Francisco. And hang out with your friends, even. Think Sunday Night Football, not Doom VR.
VR requires applications that can't be done on flat-screens, just as movies require content that can't be done in books and radio requires content that can't be done on MP3 downloads and TV requires content that can't be done in a movie theater.
For example, apps like Tilt Brush can't be done on a flat-screen. Yes, you can create 3D content on a flat-screen, but Tilt Brush is not just a 3D content generation tool. It's more of a 3D painting tool, a creative release rather than a productivity tool, and it's a social network of content from other people that you can explore. On a flat-screen, it just wouldn't have the same usability or impact.
There is a VR Digital Audio Workstation-like app called SoundStage that, while it doesn't have the usability of Tilt Brush, does signal some interesting concepts for a potential Tilt-Brush-like DAW.
I originally didn't think Google Earth in VR was a good use of VR, but since starting to work in the language training industry, it has been a surprising source of content for us to improve language training for adults. As an app on its own, I'm not sure it makes a whole lot of sense past a basic demo, but the content and form factor are very useful if one can apply a purpose to it other than just exploration. (Though the exploration can be fun for a short time, too).
I think the value of action-oriented VR titles for exercise cannot be overstated. It's pretty much the only form of exercise I have any patience for anymore (between children, job, volunteer work, side projects, the high cost of living in the DC metro area, and a sports injury or two). Obesity is an epidemic. Mass adoption of VR could help with that.
As an addendum to that previous point, I also really enjoy VR meditation apps.
VR could (and in some offices, is) revolutionize teleconferencing. I've had meetings in AltSpace VR that, even 5 years ago, felt better than using Skype for remote meetings.
Right now, we have a very high cliff to climb to be able to create content for VR. I think it's a very similar issue to when GUI operating systems first started to come out. If all you knew were CLIs, it was very hard to see the utility of the GUI. And when people first started adopting them, they frequently had to dump out to the CLI to do things. Less than 20 years later, the CLI was relegated to niche use. I don't think the vast majority of people could have anticipated the sheer number of apps we use, the mass of files we manage, the ubiquity of computing in our lives, in the mid 1980s. But none of the systems that make that possible were a part of those first GUIs.
Most of the examples you give though emphasize novelty over utility to me. I agree the Tilt Brush is neat and I think that absolutely has an application, but I think it's similar to MS Paint. MS Paint was a revolutionary program iconic with Windows, but it was a feature attached to the actual product of Windows. Not many people bought Windows just to use MS Paint.
I am still failing to see the actual product case for VR, outside of niche circumstances. Most early adopters I know in the gaming space have largely abandoned it. So far, from an adoption perspective, VR seems like a less successful device than the MS Kinect. The Kinect wildly outsold VR headsets (talking an order of magnitude difference) in the beginning, but then people just sorta stopped using it because they felt it was largely a gimmick.
Most of the stuff you mention is all about potential, and it's wonderful to have believers such as yourself because you are the guys that push to create an industry. But right now VR is still in its infant stages, and personally I feel like it's been around long enough that we'd have some idea if it were going to take off.
I really do hope you're right and I'm wrong and there is a really cool technological medium being developed that will have a great impact on most (or even many) people's lives...but I don't currently believe it.
It's been there for decades. I tried Jaron Lanier's original VR rig in the 1980s. Autodesk's early one around 1990. W Industries somewhere in there. HTC Vive and Microsoft HoloLens last year. It's fun for a while, but not compelling for long periods.
Except for Beat Saber. That uses VR right. You get to move your body, but you stay locked to the real world reference frame, so it doesn't induce nausea.
Valve promised to release 3 big titles by the end of the year, and it's November with no news yet.
The problem I see is that VR adoption is so low there's little incentive for developers to target for it since the development is so expensive. And this in turn means fewer people are excited to buy VR - it's the chicken and egg problem. Facebook was supposed to solve this by throwing gobs of money at studios, but considering how much money they spent that was an absolute failure.
The reason VR isn't taking off is that most people don't find VR worthwhile. What would it take to make it more worthwhile? Better technology, better price, better content. I think the tech is 2-3 years away from being good enough for even early adopters. So far, people who use VR are mostly hackers, developers, tinkerers; not many of them are really consumers first. I think the price is 5-6 years away from allowing a transition to mainstream. But the content? Honestly, this was what I was most excited about years ago, but now I'm beginning to think VR is just a dead end. Yes, there's some cool stuff, but very little of it isn't a gimmick.
Did you miss the recent Tilt Five Kickstarter? It's a new display technology that I'm eager to try next summer, if they manage to deliver. It needs a reflective screen somewhere in the room, so it won't work as a mobile device; but for anything room-based (games, teaching, and office stuff) it gets a lot of bullet points right, and I think it could replace a lot of use cases for VR as well.
What kind of elitist nonsense is this? The reason AR isn't taking off isn't because the users aren't good enough for AR, it's because AR isn't good enough for the users.
> It takes time for new technologies to become widely adopted and utilized.
My point is some technologies just end up not being very useful, ever.
Could you expand on this, please? I didn't realise this was a significant concern until now.
Apple on the other hand focused on AR that works with normal phone cameras, allowing all their devices to get ARKit support, which is what Google ended up doing too. They dropped Tango for ARCore, which runs on any device with a normal camera on it.
Specialized hardware will always lose, because it's stuck in the content -> hardware -> content vicious cycle. No one buys it because there's no content, and no one makes content because no one has the device.
There's rampant speculation that Apple will deliver a glasses experience where Google already failed. I think this is possibly the last chance for AR to get traction before going back to the shelf for another 20 years.
And it doesn't require those weird-looking and weird-feeling controllers to interact with the world.
So... maybe VR is not yet there, because the hardware does not allow for the full immersion that we unconsciously expect/want from a virtual reality?
You can keep your experience confined to the user's play boundary. You can teleport people to new locations. You can let people walk to the edge of their boundary and use a gesture of some kind (e.g. flicking a thumbstick on their controller) to turn them around, doing "redirected walking". You can provide a "strong stomach" mode that moves the player at constant velocity in the direction they are looking. Or you can stick the user in a "vehicle" with clear visual cues that they are in said vehicle. And that's it. There is nothing else you can do.
For example, Doom VFR only has teleportation movement and it results in an experience that doesn't feel very Doomlike. Instead of blitzing around like a rabbit with a gun, now Doomguy is a magical genie with a gun.
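To make the difference between these locomotion schemes concrete, here is a minimal, engine-agnostic sketch. The `Vec3`, `teleport`, and `smooth_move` names are my own illustration, not from any real VR SDK:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def add(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def scale(self, s: float) -> "Vec3":
        return Vec3(self.x * s, self.y * s, self.z * s)

def teleport(target: Vec3) -> Vec3:
    # Teleport locomotion: the camera jumps instantly to the aimed-at
    # point. No intermediate frames means no perceived self-motion,
    # which is why it sidesteps motion sickness (and why Doomguy
    # ends up feeling like a genie rather than a sprinter).
    return target

def smooth_move(pos: Vec3, gaze_dir: Vec3, speed: float, dt: float) -> Vec3:
    # "Strong stomach" mode: advance at constant velocity in the
    # direction the player is looking, one frame (dt seconds) at a time.
    return pos.add(gaze_dir.scale(speed * dt))
```

For example, `smooth_move(Vec3(0, 0, 0), Vec3(0, 0, 1), speed=2.0, dt=0.5)` advances the player one unit forward in a single half-second frame, while `teleport` discards the in-between positions entirely.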
To start, I think it's a mistake to approach VR application design as "this app, but now VR and immersive", whatever the app may be. It might get better in the future, but for now there are too many compromises for the user to start your design process in that way. VR requires a bottom-up rethink of what you're creating. If all you are creating is "X but in VR", then you're not actually adding any value for the user, while simultaneously asking them to commit to the economic, ergonomic, and context-switching costs of entering into VR.
I am extremely skeptical that the first-person shooter will ever be successfully ported to VR in a way that retains the mainstream interest it currently has. I see it as similar to how the Adventure genre went from being one of the most popular to an extreme niche with the advent of 3D graphics.
I think FPSes are popular on flat-screens because they specifically aren't realistic. Real combat is the most grueling thing one can do, basically by definition, because if you don't try harder than the other guy, you die, so you have to try your hardest and hope it's enough. The mouse-and-keyboard or gamepad system with a stationary screen completely removes that issue. And while I enjoy FPSes on flat-screens, I've personally found the experience of simulating killing people in real first person view to be psychologically disturbing.
I play FPSes strictly single-player, as I can't stand the other people I invariably get paired with in public multiplayer, which I think we all know is a very common problem. If VR could kill off the FPS genre and cut out a huge swath of the most problematic portion of the gaming "community" (this issue is why I always scare-quote the word "community" when referring to gaming), then I think we'd all be the better for it. You don't see a lot of GamerGaters complaining about "SJWs" in the context of strategy game releases.
So there are very significant technical and ethical reasons why I don't think we should even try to be porting FPSes to VR, and instead focus on revisiting old genres and discovering new ones.
There are quite a few popular milsim FPSes which favor realism (to a reasonable extent), both in in-game physics (weapon, vehicle, and player-body simulation) and in visual depiction.
I think the FPS is popular because it's one of the fundamental genres out there: a depiction of combat as perceived by a combatant. This... err... trope has been around since the dawn of man, with people imagining themselves fighting. The advent of 3D graphics brought that to computer gaming.
Adventure is another fundamental genre, and I'd disagree that it's niche. See below.
> [...] basically by definition, because if you don't try harder than the other guy, you die, so you have to try your hardest and hope it's enough
I'm sorry, but I just don't understand this argument. It applies to every competitive game out there, no matter the genre. If a game has the concept that the best-performing player wins and the other player(s) lose, you have to do your best.
> I've personally found the experience of simulating killing people in real first person view to be psychologically disturbing.
That's a problem unrelated to VR. Although it is somewhat aggravated by it, as VR strives for a more immersive experience and, thus, more realistic depictions of in-game events.
I wonder if you should compare this to airsoft games. They provide a realistic, immersive environment (in my perception, immersiveness is the primary promise of VR), and they're sort of FPS-in-the-real-world stuff.
> stand the other people invariably get paired with in public multiplayer
First, I believe this has nothing to do with game presentation layer (like VR or even 3D at all) or genre. Second, there is an awesome solution for this: playing with friends.
As for the genre - I assure you, you can find less than enjoyable opponents in any game genre. Especially strategy games (MOBA/ARTS subgenre, specifically, hehe), but not limited to that.
> If VR could kill off the FPS genre
I don't think that's possible. Again, the advent of 3D graphics brought the genre to computer games, and you can't put the genie back in the bottle. VR, if it becomes mainstream, could transform the genre, but it can't kill it. The fundamental idea of simulated combat as perceived by a combatant is far too high-level a concept.
Also, I'd disagree with your "adventure used to be one of the most popular and now it's niche" statement. Adventure-genre concepts are still as popular as they always were. Storytelling, exploration, and puzzle-solving are key elements of a lot of mainstream games out there. My understanding is that what happened was genre fusion, not death. The specific UI design patterns (especially those of point-and-click adventure games) became less popular, but I still wouldn't say the genre has gone "extremely niche".
> and cut out a huge swath of the most problematic portion of the gaming "community"
I'm sorry, but this sounds really weird to me. The world just doesn't work this way, so your idea feels strange. It's not as if $genre games make people toxic, and if the genre is gone those people just vanish off the face of the Earth. If the FPS genre somehow magically disappeared, the desire to interact with others would lead them to other pastures.
In other words, moving from VR to AR would require a ton of extra work to integrate the real world. But moving from AR to VR should be much more straightforward, albeit in many ways equally challenging because now you have to worry about two worlds - the normal overlay of AR and the environment from VR.
Now that I’ve said that, maybe I should think of them as more like related disciplines instead of special cases of each other...
I'm not sure what's going to happen first - technology to do proper AR like this, or fast enough, high-resolution (both spatial and color) enough cameras and displays being available to do the "AR via VR" option.
VR is all about total immersion - sights, sounds, movements and everything else. The entire world needs to be contained in the headset, and what's happening outside it is irrelevant.
AR, on the other hand, is about interacting with your surroundings. Immersion is cut down, but the system now has to be intelligent about the outside world - recognizing objects, projecting at the right places etc.
Really? Why not?
First of all, Apple's ARKit (and by extension, Google's ARCore) is not a good experience. It has some "gee whiz" factor to it, and then once you get over it, you realize that there really aren't very many things you can do with it that you can't do better with a smartphone or a VR headset.
Case in point: I spent a lot of time at my previous employer working on self-service/guided-repair AR apps for the iPad Pro. It's a ridiculously bad concept (though my previous employer is kind of known for chasing ridiculously bad concepts). 1) Your hands are occupied with the tablet, so you're not actually able to perform the repair tasks without putting the tablet down, thus largely negating the usefulness of the overlaid 3D animations we created, and 2) who is going to take a $1000 Apple product with them out to a repair job?
We also spent a lot of time turning large posters into 2D animations for conference expos. You could have had the exact same experience for 1% of the cost by just buying gigantic TVs and turning them to portrait orientation. I mean, I didn't complain about getting the money, per se, but it was just not good stewardship for our clients for the sales team to even propose that contract.
There are distance measuring apps that presume the tracking is accurate and precise, which it is not. For a very long time, ARKit had very little concept of how far away any given surface was. Instead, it was good about scaling objects so that visually they did not appear to be moving closer to you as the tracking point jittered wildly on the Z-Axis. At best, it's better than walking off steps to get rough measurements of a space, but anything more than that and you're going to have an easier, faster, more reliable time with a $10 retractable tape measure.
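To illustrate the jitter problem, here is a minimal sketch of the kind of filtering an app has to layer on top of raw per-frame depth estimates. This is not ARKit's actual API; the `smooth_depth` function and its `alpha` parameter are my own illustration of simple exponential smoothing:

```python
def smooth_depth(samples, alpha=0.2):
    """Exponentially smooth a stream of noisy per-frame depth estimates.

    A low alpha trusts the running estimate and damps frame-to-frame
    jitter on the Z axis, at the cost of lagging behind real motion.
    That stability/accuracy trade-off is exactly why tape-measure-grade
    AR measurement is so hard to deliver.
    """
    estimate = samples[0]
    smoothed = [estimate]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed
```

With `alpha=0.5`, a jittery sequence like `[2.0, 2.4, 1.8, 2.2]` (meters) is pulled toward a stable value near 2.0, but a real step toward the wall would take several frames to register for the same reason.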
Second, the HoloLens is an amazing device, but it's nowhere near ready for any mainstream use. I think it does have a good use case in industrial repair scenarios (which was originally how we got the iPad work: I had built an app for the HoloLens that did the same thing, but then the suits got involved and wanted to push our Apple partnership). There are specific features of the HoloLens that make this work, and specific features of the job that make the HoloLens work. First, the HoloLens's built-in, completely free voice recognition is a huge boon for building systems you can use while your hands are occupied with the job. Second, having physical objects (the machine you're trying to repair) on which to hang virtual graphics hugely improves both the believability of the visuals and the usability of the application (primarily because of object-permanence issues caused by the limited field of view).
That last point wouldn't be so bad if the HoloLens were up to spec with much, much cheaper VR headsets in terms of graphics performance and visual fidelity. But so long as you're paying literally 10x as much for half to a quarter as good visuals (not to mention the mismatched input systems), you have a lot of ground to make up before an AR-only experience is as valuable and meaningful to a user as a VR experience.
AR will continue to be a lackluster consumer experience for a long time. Most developers making apps for these devices don't understand how to make a real AR app. In AR, the user supplies the context to the developer; in VR, the developer supplies the context to the user. This fundamental dichotomy makes good, meaningful AR application design significantly harder than VR design. In my own evaluation, easily 80% of the extant software in the HoloLens and Magic Leap stores takes no cues from the user's immediate vicinity or context when building its experience. In essence, they are just crappy VR experiences.
In comparison: VR is cheap, it works, it is relatively easy to design for, and there are already a ton of great apps available for it.
Now, the Pixel 4 doesn't have Daydream support at all, and it's got a beefy Snapdragon 855 in it.
Gear VR doesn't support the Note 10, and there are rumors that their partnership has ended.
The VR industry is moving past mobile VR because it has trouble with user retention. Oculus can abandon Gear VR and Samsung to focus on their own VR solutions from Go to Quest to Rift.
Samsung, meanwhile, is still working on its standalone headset and possibly a next-generation Windows Mixed Reality headset.
If not for Google Cardboard I'd never have invested in a Vive, and probably would have written the whole thing off as a gimmick.
Google has already abandoned the project, so it should not be an issue to open-source the hardware and related assets along with the SDK.
It might allow different companies to continue Daydream, instead of letting it die.
And once Oculus Link is released, the Rift catalogue will be available for the Quest, and I'm considering upgrading from a Rift now that it seems like a real upgrade. I'd say a product which convinces current adopters to upgrade so soon is a success.
I bought mine in May and have been using it around 6-8 hours a week for working out (Box VR, Beat Saber). It is really a nice device for that, I’m not sure about gaming however.
The kicker, of course: it's an untethered system (which is great, but sacrifices some visual quality). But even if that's an issue for you, Oculus Link will be available shortly, and you'll be able to use it as a viewer connected to your gaming PC.
Quest is the first VR product that actually has the potential for mainstream success. Phone VR is tedious to use and PC VR provides a good experience but is expensive and only a small percentage of adults want a gaming tower PC in their homes in 2019.
Meanwhile, the Quest managed to achieve user retention solely from how seamless it is to strap on the device and get going in VR, as opposed to the inconvenience of taking your phone out of its case and slotting it into a headset.
A seamless, cordless experience like the Quest's is probably the future of VR, even if it has limitations now and is still relatively too expensive for the masses. The Cardboard/Daydream/Gear VR line of VR is seen as a dead end now: Daydream is dead, and Gear VR might as well be.
Google must have looked at what it has on the horizon for VR and decided that it now can't compete with the likes of Oculus and HTC.
VR isn't really about the 3D effect; it's about having your head and hands tracked in a 3D space and what can come from that level of interaction. Daydream never really offered this, and after using a decent amount of VR I find that's the most important part of the experience.
Then I realized it wasn't compatible, but figured I might get a new Pixel when they launched, so I kept it just in case... Whoops.
When did you decide to become an internet troll?
It also explains why:
> Daydream-ready phones are built for VR with high-resolution displays, ultra smooth graphics and high-fidelity sensors for precise head tracking.
These requirements were created to ensure a good experience, not to troll you.
I purchased a Moto Z3 because I thought it was Daydream compatible, only to find out "Moto Z" didn't mean the Moto Z line of phones; it meant one specific old model. I'm still waiting on a VR headset for it. The screen on my phone is leagues better than the one they put on the Oculus Go I am using.
Daydream doesn't work like Cardboard. They're completely different. The Daydream headset only connects to specific phones and requires its own driver. Cardboard doesn't work with it at all.
So while "Cardboard" can "work" on most phones, it's a pretty terrible experience once you get past the awe factor of seeing stereoscopic visuals with head tracking. We developers have largely stopped making anything with basic "Cardboard" support, for fear of giving people a bad first impression of VR.
As for Daydream, a lot of the point of the Daydream system was to get around those issues I mentioned. First, it required phones to meet a certain hardware profile before they were allowed to market themselves as "Daydream Compatible". Second, it included changes to the operating system to provide much lower latency in input tracking and to change display settings to reduce pixel persistence.