
I hope you realize that 8k/eye at 120Hz is in the 100Gb/s range with compression, and that you wouldn't see the difference anyway, since that's already a retina display (60ppd) at a 130deg view angle. Also, there are NO GPUs today that could come close to rendering even a single eye at 60Hz (even 30Hz would be absolutely top end).
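Quick sanity check on the bandwidth claim, under my own assumptions (7680x4320 per eye, 8-bit RGB, roughly 2:1 visually lossless compression):

    # Rough bandwidth for 8K-per-eye stereo at 120 Hz (all numbers are assumptions).
    width, height = 7680, 4320            # one eye
    eyes = 2
    bits_per_pixel = 24                   # 8-bit RGB
    refresh_hz = 120
    compression = 2                       # ~2:1 visually lossless (assumption)

    raw_gbps = width * height * eyes * bits_per_pixel * refresh_hz / 1e9
    print(round(raw_gbps))                # ~191 Gb/s uncompressed
    print(round(raw_gbps / compression))  # ~96 Gb/s, i.e. the "100 Gb/s range"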

So, I think your requirements are completely unrealistic, but I agree that the current VR (1k/eye at <90Hz with 20-30ms latency) is unusable and gives me a headache. I suspect that around 2-3k/eye at 90-120Hz with 10-20ms latency will be sufficient to be usable.

Unfortunately, that almost certainly means foveated rendering (since UHD at 60Hz is too hard and 120Hz is right out), which will take some time. However, it probably also means the bandwidth could fit over untethered multi-Gig wireless. Having an untethered system driven by a high-powered GPU would be really nice. <edited to add> http://research.microsoft.com/en-us/um/people/johnsny/papers...




I said that was when I would go back to VR. Of course I know we don't have the consumer hardware to run it today. But we will in 10 years.


As slow as growth has become in the graphics card and general computing industry, getting to the point where we can do 120Hz at 8k per eye is a lot further than 10 years off. It will require a significant change in how (where?) we're generating graphics, not just better hardware.


You're absolutely right... But by rendering the high-resolution and low-resolution views separately (each at FHD, with one upscaled 4x), we could do 60Hz today. It only requires 2 FHD renders (which is less than a two-eye view at UHD), so 60Hz is very reasonable... possibly 90-120Hz. It also cuts the bandwidth by 8x (2/16) without compression.
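Rough pixel math behind that claim; my reading (not necessarily the parent's) is that FHD = 1920x1080 and "4x upscaled" means 4x per axis:

    # Per-eye pixel counts: two FHD passes (fovea inset + periphery upscaled 4x per axis)
    # vs. rendering the 4x target natively.
    fhd = 1920 * 1080
    native_4x = (1920 * 4) * (1080 * 4)   # 7680 x 4320

    foveated = 2 * fhd                    # high-res inset + low-res wide view
    print(foveated / native_4x)           # 0.125 -> the "8x (2/16)" saving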


There are so many new rendering tricks popping up all the time... new APIs also give less overhead, which is nice.

Either way...

GTX 580: 1581 GFLOPS

GTX 1080: 8228 GFLOPS

7 years. A 420% increase in floating-point throughput.
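For reference, the growth rate implied by those two numbers (taking the 6.5-year span mentioned in the reply below; the exact span is debatable):

    # Implied growth rate between the GTX 580 and GTX 1080 FP32 numbers above.
    import math

    speedup = 8228 / 1581                                # ~5.2x, i.e. +420%
    years = 6.5                                          # span per the reply below
    annual = speedup ** (1 / years) - 1                  # ~29% per year
    doubling = years * math.log(2) / math.log(speedup)   # ~2.7 years to double
    print(speedup, annual, doubling)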


And you only need high resolution and high color depth at the fovea. Eye tracking will be the key to making VR/AR low-power and high-quality.


It won't, no worries. Eye tracking in an HMD, whether AR or VR, is hugely complicated from an engineering point of view. There is a reason an HMD eye-tracking kit costs several thousand USD.

Moreover, from the rendering point of view, foveated rendering is a fairly complex thing to integrate into a 3D engine too. It is definitely not "free".

Foveated rendering is certainly no panacea.


No panacea, but another important piece of the puzzle. The software part may be complex, but it is just software, and once it's done we all benefit from 10x battery life. Eye-tracking hardware is complex, but there is a lot of ongoing R&D, the outcome of which will be sensor chips that can be added to HMDs.


Only 6.5 years :)


Assuming that it's feasible to do ~1080p per eye at 120Hz today, 8k per eye is only 16 times more pixels. And considering that increasing the pixel count is embarrassingly parallel, I don't see a problem with being able to do that in 10 years.


Looks like current technology is 1000x1000 per eye at 90Hz: http://www.digitaltrends.com/virtual-reality/oculus-rift-vs-...

8k x 8k per eye at 120Hz is 64x more pixels at a 1/3rd increase in frequency, ~= 85x more processing power. Making the (maybe faulty) assumptions that processing power doubles every 2 years and that the current setup is processor-limited, that much processing power is ~13 years away.

Same computation but with 4k x 4k per eye predicts ~9 years of progress needed.
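The same estimate as a quick script, assuming a clean 2-year doubling and the 90Hz -> 120Hz refresh bump:

    # Years of 2-year doublings needed to reach N times today's pixel throughput.
    import math

    def years_needed(pixel_factor, hz_factor=120 / 90, doubling_years=2):
        return doubling_years * math.log2(pixel_factor * hz_factor)

    print(years_needed(64))   # 8k x 8k vs 1k x 1k per eye: ~12.8 -> ~13 years
    print(years_needed(16))   # 4k x 4k per eye: ~8.8 -> ~9 years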


I guess I misspoke: 1920x1080 is a "2K" screen split in two. An 8K screen split in two would be ~4000x4000 per eye, which is still 16x as many pixels as I said, plus the 33% increase in frame rate which I didn't include, and that matches your second estimate. Although given how embarrassingly parallel it is, I don't think it's as far off as it seems. Especially considering that the previous generation of graphics cards can handle current-day VR fine, so we're already 1-2 years into the 9 years we have to wait, and with that many pixels anti-aliasing can probably be turned off completely. You could probably build something today that could do it; it would just be very expensive, and I don't think 8K panels at cell-phone size exist yet.
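Putting numbers on the split-in-two framing (assuming 2K = 1920x1080 and 8K = 7680x4320, each split between the two eyes):

    # Per-eye pixel counts when one panel is split between the eyes.
    per_eye_2k = (1920 // 2) * 1080      # 960 x 1080  ~= 1.0 MP
    per_eye_8k = (7680 // 2) * 4320      # 3840 x 4320 ~= 16.6 MP
    print(per_eye_8k / per_eye_2k)       # 16.0 -> 16x as many pixels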


Also, eye-tracking + foveated rendering will severely reduce the load. Once that works reliably, you just need cheap, super-high-PPI, low-latency screens (which might almost exist today, though at high cost due to the lack of a mass market).



