Start multi-tasking with your virtual reality headset (getspace.io)
118 points by T-A on June 29, 2016 | 72 comments



Despite plenty of studies showing that multitasking is less efficient for humans than single-tasking (e.g. http://news.stanford.edu/2009/08/24/multitask-research-study...), here we are inventing new ways to be even more distracted.


It's a false assumption that multiple windows equate to multitasking. Just because you have multiple applications open doesn't mean you are multitasking.

This is an angle on VR, beyond entertainment, that hadn't occurred to me.


Their product pitch (quoting from their website) is:

> Start multi-tasking with your virtual reality headset


When it comes to technology, "multi-tasking" means something completely different.

On iOS, "multi-tasking" features include switching between apps by double-clicking the home button and selecting a recent app, and Split View.

Windows considers "window snapping" to be a multi-tasking feature.

As we all know, multi-tasking isn't really thinking about two things at the same time; it's switching between those two things very rapidly. When it comes to technology, you can hardcore multi-task by having 5 video chats going while watching a YouTube tutorial and writing your grocery list (which is more along the lines of the study you linked), or you can "multi-task" by having plenty of windows open in a larger viewing area to lessen the need to constantly switch between apps.

E.g. as a web developer, I would not be productive without switching between code, a terminal, and Chrome.


This is an issue of semantics. What the studies above are referring to are humans attempting to take on multiple tasks at once. The product pitch is referring to the machine accomplishing multiple tasks at once.


Well, even though they market it that way, it's more like a multi-monitor setup, which is totally valid to have, at least for a developer.


There's a difference between your computer multi-tasking and you multi-tasking.

Right now, I have two shell windows open, TextMate, and two browser windows -- one in Jenkins, one in AWS. But they're all related to the project I'm actively working on.


While I can agree with that, currently I do most of my work on a 17" laptop. I don't need to multitask to benefit from this: being able to keep working on a laptop, but with a workspace big enough to bring up a site I'm working on at full size, while still having room to see developer tools, my editor, and reference material at the same time, all without having to e.g. switch workspaces or cram any of them into less-than-ideal spaces, would be great.


You're looking at it the wrong way. The point is to make the system multitask. It's like opening several tabs/windows: I often spend time browsing HN while my project builds in the background. Similarly, while a site I'm opening loads (yes, this is a thing, because developers love making heavy websites these days), I shift to other tabs.

And when you have to port design specs to code, it is much easier to just turn your head than to switch tabs/windows. This would indeed be a much better option.


Why do you say that? I'd love something like this product, and my workflow is single-tasking. I have all my windows open in full screen all the time and switch between them, and I usually have a movie playing on the second screen while I work.

Yes, it takes me a day to watch an hour's worth of movie, but it's a nice and easy distraction when waiting for stuff to build or doing other low cognitive impact work.


That's what's commonly known as multitasking...


I said it because their product pitch (from the website) is:

> Start multi-tasking with your virtual reality headset


And so what if studies have shown that?


Nothing to do with "multitasking" - instead of putting real monitors (which cost money) around you, just put virtual, free monitors around you - as many as you wish!

Sounds like something awesome, but kinda impractical for now - at least until Oculus-grade headsets become the size of eyeglasses.

I guess that should work great with AR headsets, since these won't constrain your situational awareness.


The bigger problem is pixel density.

I've tried both the DK1 and the Vive, and the Vive is basically at the point where games are very playable.

Some people complain about the "screen-door" effect, which basically means the pixels are so large that it's like you are looking through a screen door. But in my experience that's not a problem. If you stop and concentrate you can see the pixels, and they are quite large, but with game-like graphics your brain is more than happy to fill in the gaps between pixels.

This doesn't translate to text. Text needs to be massive before you can read it. There is no way you can have multiple monitors at desktop distances. The best case is a single, low-resolution monitor (about 720p) so close to your face (or so far away and massive) that it takes up your entire field of view. To see any extra monitors, you would have to rotate your entire head.


Resolution is the reason I haven't bought one yet.

When the 'in-world' resolution can match 1920x1200 at 2 ft, it will replace my 3 monitors; otherwise it's a headache-inducing experience.

If the technology takes off and follows the usual S-curve, I think we'll have that inside 5 years.


> When the 'in-world' resolution can match 1920x1200 at 2ft

Sitting about 2 ft from my 24" monitor at work, I'd estimate that it would take 9 such monitors to get to around the same field of view in my HTC Vive. So you're talking about 5760x3600 pixels there.

Maybe we're going to need iris tracking as an optimization? That's a lot of pixels to push at 90 Hz.
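Rough back-of-envelope, if anyone wants to check my estimate (a minimal sketch; the ~110 degree FOV and the 24" 16:10 monitor geometry are my assumptions):

    import math

    def monitor_angles(diag_in=24, aspect=(16, 10), dist_in=24):
        """Angular width/height (degrees) of a flat monitor viewed head-on."""
        d = math.hypot(*aspect)
        w = diag_in * aspect[0] / d
        h = diag_in * aspect[1] / d
        return (2 * math.degrees(math.atan(w / 2 / dist_in)),
                2 * math.degrees(math.atan(h / 2 / dist_in)))

    h_deg, v_deg = monitor_angles()   # ~46 x ~30 degrees at 2 ft
    fov_h, fov_v = 110, 110           # rough Vive FOV (assumption)
    cols, rows = fov_h / h_deg, fov_v / v_deg
    print(f"{cols:.1f} x {rows:.1f} monitors to fill the FOV")
    print(f"~{cols * 1920:.0f} x {rows * 1200:.0f} px needed per eye")

That prints about 2.4 x 3.7 monitors, i.e. roughly 4600 x 4400 pixels - the same ballpark as 5760x3600 once you round to a 3x3 grid.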


> That's a lot of pixels to push at 90 Hz.

It is, but considering the current pace of GPU development, it's what, 5-6 years away on the really high end, and maybe 7-8 for the midrange $200 cards.

VR has already re-invigorated the competition between AMD and Nvidia, as both want a foothold in the potential new market. Interesting times.


Agreed that pixel density is an issue, but it can be worked around via supersampling. I just saw that the guys at /r/vive discovered a global supersampling setting. Someone posted this before-and-after:

http://imgur.com/a/uDTwP

Of course, the issue now is GPU horsepower. Rendering two viewports at 1200x1080 at 90fps is no easy task. Supersampling at 2x means rendering 2400x2160 per eye at 90fps. Anything short of a 1080 is going to have performance issues at this level of supersampling, and even then it depends on how complex the game environment is and its settings. I messed around a bit with BigScreen and I can certainly see the potential for VR as a monitor replacement for $some_tasks. I have yet to see supersampling settings for it, or whether the global setting affects 2D screens projected into 3D space.
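For a sense of scale, a napkin-math sketch of the shaded-pixel throughput implied by those numbers (the 4K@60 comparison is mine):

    # Shaded-pixel throughput implied by the numbers above (napkin math)
    native = 1200 * 1080      # render-target pixels per eye
    ss = 2                    # 2x supersampling per axis -> 4x the pixels
    fps = 90
    vr = 2 * native * ss ** 2 * fps     # both eyes, per second
    desktop_4k = 3840 * 2160 * 60       # a 4K monitor at 60 Hz, for scale
    print(f"VR: {vr / 1e6:.0f} Mpx/s vs 4K@60: {desktop_4k / 1e6:.0f} Mpx/s")

That's roughly 933 vs 498 Mpx/s - nearly twice a 4K desktop's worth of pixels every second, before accounting for any per-pixel cost differences.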

So yes, current densities might be enough to get by without fuzziness and hard-to-read text if you have enough GPU muscle. I don't think we've figured out all the tricks with first-gen VR yet; there are going to be a lot of little surprises like these, I suspect. At the very least we know AMD and Nvidia are supporting performance-enhancing features that no one has really implemented yet.[1] I suspect the Vive of June 2016 is going to be a very different experience compared to the Vive of December 2016, the same way console launch games don't look as good as games released towards the end of the console's life.

[1] Yesterday one of the PoolNationVR devs said he got a 20% performance increase using Nvidia's Multi-Res Shading. This change goes live in July. A 20% performance increase at no cost? Crazy.


I wonder how many tasks are well suited to the current pixel density. You can't read text across a 12-virtual-monitor array, but there must be other things that work well.

Maybe it could replace a wall of monitors displaying surveillance footage in a security room. You could even do impossible things: enlarge the screens with lots of movement, and shrink or fade out the ones where no motion is detected. Show each video feed overlaid on a map of the premises.

I have no idea exactly what daytraders are looking at, but I know it's a stereotype for them to have tons of monitors. You could spawn a few dozen virtual screens displaying charts, have indicators when a chart behind you changes within some rules you've defined (something similar to the "you're being shot from behind" indicator in shooter video games), maybe with different colors, shapes and scales depending on the rules you set.

There's probably a ton of potential being passed up just because these don't do text very well yet.


Hm, I was expecting "rotate head" to be the way it works - most headsets support that, and it feels natural that way.

I'm not really sure how big the text should be - in the games it looks natural, but there is no sense of scale.


That probably is the way it works. But the problem is that the resolution of the VR devices at the moment is 2160 x 1200, i.e. the equivalent of a single monitor.

This display has to fill your entire field of view, so the pixel density of any 'virtual monitor' must be quite low; therefore it won't work 'like you're sitting at a desk of monitors'.

Imagine the inverse. Imagine you've got a single curved monitor at your desk that fills your entire field of view. Sounds great, right? Now imagine that monitor is 2160 x 1200. Not so great.
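To put rough numbers on that (a sketch; the per-eye panel and FOV figures are approximations):

    import math

    # Pixels per degree: current HMD vs a desktop monitor (rough figures)
    hmd_ppd = 1080 / 110    # ~1080 px per eye across ~110 degrees
    mon_w = 24 * 16 / math.hypot(16, 10)   # 24" 16:10 panel -> ~20.3" wide
    mon_deg = 2 * math.degrees(math.atan(mon_w / 2 / 24))  # viewed at 24"
    mon_ppd = 1920 / mon_deg
    print(f"HMD: ~{hmd_ppd:.0f} px/deg, monitor: ~{mon_ppd:.0f} px/deg")

That comes out to roughly 10 vs 42 pixels per degree - about a 4x density gap, which is why text that's fine on a desk turns to mush in the headset.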


Pixel density is a huge issue. With current Oculus and Vive displays, you can not only distinguish individual pixels but also see the red, green, and blue subpixels.

I can't imagine doing work with that. Only some specialized tasks could work; I can see how something like Tilt Brush could fit into CAD work. But projecting your usual 2D app windows into VR sounds awful with the current consumer products in mind.


This hasn't been the case for me. I don't really have much problem reading text on the CV1 at all. The fact that you're not looking at a static image makes a pretty big difference. Tiny head motions give you kind of a temporal anti-aliasing effect that makes the resolution seem better than it is.


How about Google Cardboard and the Eee Keyboard?

They are both pretty lightweight.

Finally we could look like the keyboard cowboys from Neuromancer ;)


I think portable VR + something like this [0] has massive potential.

0: http://www.theverge.com/2016/1/8/10738792/harman-touchless-u...


Or if they get eye tracking + foveated rendering. Then you get super effective navigation between 'monitors' (just look at one), and the rendering isn't so demanding, enabling either more sampling or higher-resolution displays.
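A toy model of what foveated rendering could save (all numbers illustrative, not from any shipping implementation):

    import math

    # Full resolution inside a small foveal circle, quarter resolution
    # everywhere else; treat the FOV as a square for simplicity.
    fov_deg = 110
    fovea_deg = 20                     # full-res circle around the gaze point
    full = math.pi * (fovea_deg / 2) ** 2
    total = fov_deg ** 2
    work = (full + (total - full) * 0.25) / total
    print(f"shading work: ~{work:.0%} of brute force")   # ~27%

Even with a generous foveal region you'd shade only about a quarter of the pixels at full cost - exactly the headroom you'd want for higher-resolution panels.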


For programming this is nearly useless. I usually have to make my font 4-5x the size I normally use in my IDE just to feel comfortable looking at it.


Hopefully it is just a question of time before we have retina resolution even for panels that are a few inches from our eyes and magnified with optics.


Never mind properly using the hardware and its capabilities. A head-tracking VR setup has different constraints from a static rectangle. The arrangement of overviews, subpanels, and minor buffers could probably be improved.

On the other hand, we still haven't escaped the VT100 in 99% of our IDEs so I'm not holding my breath for an ultra-immersive new-fangled "I can breathe code" style.


Yeah, projecting a 2D desktop onto the surface of a sphere is, I guess, pretty neat, but not a great use of 3D space. Why not have applications be actual physical objects you can walk around? Why do they need 2D viewports? What if you could pick an app up and turn it to the side to change its settings with physical sliders and knobs?

Just sort of pie-in-the-sky -- imagine a programming language that worked like a 3D spreadsheet, where your functions were literal 3D objects you could wire together, and the parameters had physical affordances so you could adjust them...


This varies between people - i.e. I find it uncomfortable when the font is too large, since it feels like looking at an elephant through a microscope. The more code fits on the screen, the better.


I've read that you don't need to go through Greenlight for VR apps right now; you just have to contact Valve. Not sure if that's true, but it might be of help if it is.


Devs on /r/vive claim this is true. Apparently, Valve is hurting for good VR content and will fast-track anything VR-related.

It also doesn't make much sense to Greenlight VR. You need x amount of votes, but because the VR population is so low, it's much harder to get those votes compared to a traditional 2D game.


Exciting and innovative as this is, this particular idea seems like it'd be more apt in an AR setting. AR could address the issue of non-touch typists and those of us who use notebooks on our desks; also, when working in an office, being able to see people can be quite handy.


It seems to be a new Shanghai-based company started by an ex-Google engineer this year. I found some news in Chinese (http://qianbidao.baijia.baidu.com/article/503760), which says a dedicated 3D engine will be developed for SPACE rather than using Unity. It would be exciting to see a 3D desktop/multi-tasking environment become popular one day.


Can anyone who has used this comment on its general usability?

For instance, if I lean in closer to a space will it get closer, or is my head position locked?

Would this be useful for coding? e.g. is text clear enough?

What kind of resolution is needed for text to approach the level of retina displays that we have become accustomed to?

Does anyone have any thoughts on developer experience... toolchain and IDE opportunities that would take great advantage of GetSpace?


> Would this be useful for coding? e.g. is text clear enough?

It's not clear enough with today's headsets, but who knows what the next gen will deliver.

It could be useful to use a headset like this when the light from a normal screen would disturb others.


I doubt we'll reach parity with my multi-4K setups in non-custom hardware for another decade or so (and this is the first generation where I no longer want for increased pixel density in my monitors).

I should try and see how well an 80x25 terminal works on current-gen tech, though.
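Napkin math for that terminal, assuming ~10 pixels per degree on current headsets and a classic 8x16 glyph cell (both assumptions):

    # Angular size of a legible 80x25 terminal at ~10 px/deg (rough sketch)
    cols, rows = 80, 25
    glyph_w, glyph_h = 8, 16   # classic VGA text cell, in pixels (assumption)
    ppd = 10                   # rough per-eye pixels per degree today
    print(f"{cols * glyph_w / ppd:.0f} x {rows * glyph_h / ppd:.0f} degrees")

That's about 64 x 40 degrees - a single terminal would eat over half the headset's horizontal field of view.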


If it's something like Oculus Rift, then yes - you can move your head around and lean closer, for example.

If it's something simpler, with no head tracking, then no - you are fixed in one place.


> if I lean in closer to a space will it get closer

Yes, that is how it works.


> if I lean in closer to a space will it get closer

Anything else would be stupid.


Demo Video (hidden in the press-kit): https://youtu.be/B4JqN0uxnzw?t=16


Nice catch.

I think their concept is brilliant. A bit hard to gauge in a 2D video.

I so want to jump on the VR bandwagon. But... my brain is telling me to hold off for V2 or V3 when the hardware will likely be orders of magnitude better than the current crop.


I've been playing with this sort of idea on the HTC Vive, and I strongly believe that this is the future. Unfortunately, none of the available VR devices have anywhere near the resolution needed for this to be a workable proposition. Give it another 5 years or so.


So excited we live in the future. I talked about this idea with so many people when we first put on the Oculus prototypes so long ago. It's awesome to see someone putting in the effort to make it a reality. Can't wait to see this application of VR develop. After all, the foundational elements could also be used in an AR-type setup if that technology proves more human-centric in the long term. Especially in the workplace, having your field of view completely covered by a screen is probably not the most socially ideal setup.


This reminds me of SphereXP. For anyone who doesn't know, it was a 3D desktop environment from years back.

https://www.youtube.com/watch?v=PhLbDyE-MQc

Obviously this takes the concept and advances it somewhat, but overall I'm glad to see this sort of interface returning. It was a fantastic experiment and worked surprisingly well.


Also back in the day, SpaceTime 3D: https://www.youtube.com/watch?v=EMw7ftk5-1s


It sounds like there are more than a few people working on this idea now.

I just feel like it's a little early to expect users to go about multitasking like on a desktop when we're still at a point in the evolution of VR tech where prolonged sessions of HMD use are not common behavior. Even getting users to put the headset back on after stepping away is still a challenge. Not to mention the difficulties of reading text in VR, among other issues.

That's not to say there isn't a use case for a multitasking desktop in VR; it's just that this is a problem to be solved in the future. In the present, we simply need software that gives users a good reason to put the headset back on again after the novelty wears off.


I've been thinking about the same thing. The window manager needs to be built into the OS, really. I think we'll see something built into Android for this soon (standard 2D app surfaces in a 3D/360° VR environment).

I'm not sure the current display tech is that good for your eyes/brain to be staring at it 8 hours a day, never mind the current screen resolution. But eventually it will be, and using something like this will enable you to work anywhere without having to carry large screens with you.


That's like watching a black-and-white movie on a color screen. Why are they still using a 2D window manager when we could take advantage of full 3D spatial navigation?


Uh, because a GUI based on a flat "desktop" metaphor with overlapping square interfaces works so damn well.

And where the metaphor differs from an RL desktop, it's because it's more convenient that way - like how documents (windows) have no thickness, or how it's easier to look straight ahead at a surface at 90° than downwards onto a desktop at an angle.

This concept actually takes advantage of 3D spatial navigation by basically giving you a functionally unlimited amount of desktop area.


It's not an unlimited desktop if the windows are projected onto the inside of a 3D sphere. Granted, text is best displayed on 2D tiles that you can look at straight ahead, but I was talking about navigation. There is only one piece of text you can read at a time; the rest doesn't have to be arranged in a sphere. It could be laid out around the room, or even in bigger fields.


Most legacy apps are 2D surfaces.

Or do you mean why the 2D windows are not composited more in the 3D space (they all seem to have the same Z depth)?

Well, you would need a 3D input device as precise as the venerable mouse, which is what they are using in this demo.


Fair enough. From the home page it looked like each tile might be holding multiple apps. It's a good first implementation but hopefully in the future apps will be 3D-compatible.


What would a VR file manager look like to you, for example?


I know people who love to organize their icons on the desktop and lay everything out there. It helps them find things again more easily. I think it's related to the method of loci, where spatial categorization is a way to recall things more efficiently.

It would be interesting to try to tap into that. Maybe create different places with a way to navigate between them and place objects in them. It doesn't have to be exactly like reality, it would be too constraining otherwise. There could be shortcuts and different zoom levels to information.

Maybe there would be a static foreground, like a desk, that provides a workspace and a reference anchor for the mind. And then the background could move independently: a galaxy of information the user can sift through using various tools.

One interesting question is how the symbolism will develop. We need visual cues so we can recognize different kinds of objects. This will help us navigate around and also know how to interact with each item. I don't think everything should be text; it's not very practical in 3D.

But to answer your question, I don't think a file manager will exist (in the traditional sense). Right now each application takes over the whole environment, but at some point there will be a second layer, a common environment that holds all the objects. This will be the normal OS-provided environment. The file manager will be replaced by tools that help you categorize and search the objects, but it will be different, because they will just interact with the universe like any other tool.

PS: Or maybe I'm thinking too much of Smalltalk


Would pay for something like this with HoloLens + my own keyboard and mouse. Virtual desktops in the Rift or Vive give me headaches, and I use the headsets every day.


Interesting proposition. If the image quality is good enough, then even a good VR headset might actually be less expensive than an array of physical screens.

Obvious disadvantages: having to wear a VR headset all day, and you cannot collaborate with others over your screen contents anymore (unless they strap on and plug in a second headset maybe).

Another thing: This would be really cool for desktop sharing.


> (unless they strap on and plug in a second headset maybe).

On the other hand, with a high-resolution headset and good bandwidth, you can pair-program/code-review with anyone, anywhere, and they see exactly what you see.

In the longer term you could create a 3D virtual office where all the screens exist in the same space and you can jump from one set to another. As a collaboration tool, that would exceed a real-world office in some ways, since there would be no physical limitations.

Could be incredible for online tech-talks and such as well since you wouldn't be sat 50ft from the speaker trying to read slides from a crappy OHP.

This is the side of the technology that really excites me, not the games.


For online tech talks, you could also have some kind of phone app that would show the slides.


The constant rocking from head movement would make most people feel sick before they got any real work done, I imagine (seen in the demo video here: https://youtu.be/B4JqN0uxnzw?t=28). I know I'd struggle to focus on lines of code or paragraphs of text with it constantly moving around.


That's how your head moves all the time, yet you have no problem focusing. The reason you don't feel sick is that it's your own head; in that video you're watching someone else's head movement.


I tried VR, and for me it's approaching how FPSs feel. I never could play FPSs because I always felt a bit of nausea after some time.

So, VR coding? Could it be the first time code literally makes me sick instead of just figuratively? :)


As someone extremely familiar with the current state of VR tech: this is simply a non-starter right now. The resolution, pixel density, and comfort of current-gen HMDs are nowhere near what they will need to be for an actually useful productivity tool. It's great for gaming, but we're just not there yet for everyday use.


That's strange; how is this better than https://www.youtube.com/watch?v=yXEE8R4UUuc ?


I think using this for even an hour would make you tired quickly. You're really staring at a screen centimeters from your eyes.


Not important for devs until it works in MacOS and Linux. Until then, this is a toy for a toy OS.


I came here to say this.


Is this the VR-desktop monitor replacement I dreamed of?


No.

But give it 2 or 3 more hardware generations and maybe.


Nice. I'd really love to experience this!



