
I'm making 30 VR projects in 30 days to learn - risons
https://risonsimon.com/days-in-vr/
======
olivierva
I do like it when people set themselves challenges like this. A clear beginning
and end, finishing something in combination with iterative learning, is very
satisfying. A bit like a hackathon or a game jam. But this made me think,
because in a way it limits what you can do with what you previously learned.
The more you learn, the more complex the projects you can create, and halfway
through you can come up with something which doesn't fit in a day anymore;
complexity pushes build time up exponentially, e.g. when it needs some extra
tooling for generating procedural content. So what I propose is: instead of
'over the course of 30 days I will finish a project every single day', why not
follow the Fibonacci sequence? Start with nothing (procrastinating), next a
one-day project, followed by another single-day effort. Step up with a 2-day
project -> a big 3-day project -> a 5-day full-blown project -> an 8-day epic.
And finally: a full 13 days working on a single masterpiece! (33 days in
total.)

~~~
xiaoma
With this scheme if you do even 15 projects, you're already into multi-year
time frames and your 25th project will take 200 years. Fibonacci numbers grow
quickly!
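
A quick back-of-envelope check (a throwaway sketch; this counts the initial
"do nothing" day as project zero):

```javascript
// Fibonacci day counts per project: 0, 1, 1, 2, 3, 5, ...
let [prev, cur] = [0, 1];
for (let n = 1; n < 25; n++) [prev, cur] = [cur, prev + cur];
console.log(cur, "days ≈", Math.round(cur / 365), "years"); // 75025 days ≈ 206 years
```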

~~~
olivierva
Knowing when to stop is one of the most important things about delivering a
project. After trying this method a couple of times I think many will have
learned when to draw the line.

------
zerr
Can anyone explain what's radically different in "VR dev" compared to "3D dev"
(from the graphics programmer perspective)?

~~~
avaer
You render twice per frame: half the "screen" rendered with a camera at
([-eyeOffset, 0, 0] * cameraRot) and the other rendered with a camera at
([+eyeOffset, 0, 0] * cameraRot). The main thing is this has some surprising
performance implications (such as geometry complexity being more important
than shader fillrate).
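
Roughly, in three.js-flavored code (a minimal sketch of the idea; real WebVR
code takes the per-eye view matrices from the headset, and `eyeOffset` here is
just an assumed half-IPD):

```javascript
const eyeOffset = 0.032; // assumed half interpupillary distance, in meters

function renderStereo(renderer, scene, headCamera) {
  const { width, height } = renderer.domElement;
  renderer.setScissorTest(true);
  [-1, +1].forEach((side, i) => {
    const eye = headCamera.clone();
    eye.translateX(side * eyeOffset); // offset along the head's local X axis
    eye.updateMatrixWorld(true);
    // Each eye draws the whole scene into its own half of the canvas,
    // which is why geometry cost is paid twice per frame.
    renderer.setViewport(i * width / 2, 0, width / 2, height);
    renderer.setScissor(i * width / 2, 0, width / 2, height);
    renderer.render(scene, eye);
  });
  renderer.setScissorTest(false);
}
```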

You also need to sustain an unusually high frame rate without dropping frames
(90 FPS on the desktop headsets), which also has significant performance and
app-architecture implications.

It's actually not that radically different in terms of graphics; a game
programmer should feel right at home.

The harder part is the UX implications when you realize "controlling the
camera" is no longer in your hands. That might require fundamentally
rethinking how your game and/or app functions.

~~~
jsheard
There is one major difference in terms of graphics programming - deferred
rendering doesn't work well in VR since it's incompatible with proper
multisampled antialiasing, and the edge-detect and/or temporal AA methods
typically used instead are too blurry when combined with the low perceived
resolution of today's VR headsets.

For this reason there's been a trend back towards forward rendering, with some
modern twists to efficiently handle many dynamic lights like deferred does.
UE4 for example:

[https://docs.unrealengine.com/latest/INT/Engine/Performance/...](https://docs.unrealengine.com/latest/INT/Engine/Performance/ForwardRenderer)
| [https://youtu.be/6kfMVxNSowM?t=3046](https://youtu.be/6kfMVxNSowM?t=3046)

~~~
sillysaurus3
In the youtube video, they mention that moving lights can't cast shadows.
[https://youtu.be/6kfMVxNSowM?t=3276](https://youtu.be/6kfMVxNSowM?t=3276)

That's a significant limitation for a modern technique.

Here's the full algorithm for anyone curious:

> The Forward Renderer works by culling lights and Reflection Captures to a
> frustum-space grid. Each pixel in the forward pass then iterates over the
> lights and Reflection Captures affecting it, shading the material with them.
> Dynamic Shadows for Stationary Lights are computed beforehand and packed
> into channels of a screen-space shadow mask, leveraging the existing limit
> of 4 overlapping Stationary Lights.
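
The binning step might look something like this (a rough sketch of the general
clustered-forward idea, not UE4's actual code; `cellsTouchedBy` is a
hypothetical helper that maps a light's bounding sphere to the frustum-grid
cells it overlaps):

```javascript
const GRID_X = 16, GRID_Y = 16, GRID_Z = 32; // assumed frustum-grid resolution

function binLights(lights, cellsTouchedBy) {
  const cells = Array.from({ length: GRID_X * GRID_Y * GRID_Z }, () => []);
  for (const light of lights) {
    // Conservatively add the light to every frustum cell its sphere touches.
    for (const index of cellsTouchedBy(light.position, light.radius)) {
      cells[index].push(light);
    }
  }
  return cells; // uploaded to the GPU; each pixel iterates only its cell's list
}
```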

~~~
jsheard
That was a limitation of the initial implementation in UE4.14, not the
technique itself. They iterated on it in UE4.15:
[https://www.unrealengine.com/blog/unreal-engine-4-15-release...](https://www.unrealengine.com/blog/unreal-engine-4-15-released)

> Forward renderer now supports shadowing from movable lights and light
> functions.

> Only 4 shadow casting movable or stationary lights can overlap at any point
> in space, otherwise the movable lights will lose their shadows and an on-
> screen message will be displayed.

~~~
sillysaurus3
Excellent! Thanks for passing along the technique. This is an interesting
evolution.

------
blurrywh
Slightly OT:

Google Trends on VR search queries:
[https://trends.google.com/trends/explore?q=vr,psvr,%2Fm%2F0k...](https://trends.google.com/trends/explore?q=vr,psvr,%2Fm%2F0knkq2w,Oculus,Vive)

(What's interesting here is the decline after the peak.)

Same query again, but with 'iPhone' this time (to represent the mobile space
and its size compared to VR):
[https://trends.google.com/trends/explore?q=vr,psvr,%2Fm%2F0k...](https://trends.google.com/trends/explore?q=vr,psvr,%2Fm%2F0knkq2w,Vive,iPhone)

~~~
feiss
"Vive" means "lives" (verb) in spanish, so I guess that's why it shows more
popular in spanish speaking countries.

------
risons
Thanks for the overwhelming support, guys. The entire code is also available
on GitHub at
[https://github.com/viewportvr/daysinvr](https://github.com/viewportvr/daysinvr)

Feel free to ask me any questions.

~~~
regnarg
So where are you going from here? It would also be great if you could do a
little write-up of what you learned!

~~~
risons
Absolutely! Once the 30 days are over, I'll be writing about what I learned.

I'm also building a product to make VR dev easier at
[https://viewportvr.co](https://viewportvr.co)

~~~
vanattab
I would like to learn more about Viewport, but when you click Docs it requires
you to make an account first... If I have to make an account just to find out
what your product does, I leave and never look back.

~~~
risons
It doesn't ask you to create an account. It asks for your email address so
that we can send you the docs :) You can also try the 360 photos demo at
[https://viewportvr.co/demo](https://viewportvr.co/demo)

------
chidambarsk
The best way to learn. I'll be looking forward to it. I'm pretty eager to set
myself a goal like this.

~~~
risons
Absolutely. You should try it. :)

------
Vinaiah
Seems like a great project. Would have loved to see some more concrete
projects though; maybe do 10 VR projects in 30 days and build something more
substantial... A+ on effort though.

------
enturn
Another challenge is desktop streaming. I achieved it using webvr-boilerplate
(three.js) and jsmpeg-vnc. I still need to do lots of tweaking, but it works
quite well over LAN to mobile.

~~~
risons
share it with us :)

~~~
enturn
The other projects did most of the heavy lifting. jsmpeg-vnc does the screen
capture and sends the data over a websocket to jsmpeg, which renders it to a
canvas without buffering. I then took the canvas and used it as a texture in
webvr-boilerplate. I needed to make sure the texture was a power of 2 and to
set texture.needsUpdate to true on each render update. Three.js made it easy
to switch the cube out for a plane, which I moved a bit closer to the camera.

The command arguments I used for jsmpeg-vnc: -b 1000 -s 1024x512 -f 60 -p 9999
"desktop". jsmpeg-vnc doesn't support sound, which doesn't bother me since I
can use the computer speakers, but I'd like to add mouse capture for when the
headset is on. And here's a screenshot of it on my Moto G (2nd gen):
[http://imgur.com/a/uhRuN](http://imgur.com/a/uhRuN)
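
The three.js side of that is roughly this (a minimal sketch; `jsmpegCanvas`,
`scene`, `camera`, and `renderer` stand in for the real setup):

```javascript
const texture = new THREE.Texture(jsmpegCanvas); // jsmpeg decodes into this canvas
texture.minFilter = THREE.LinearFilter;

const screen = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 2),                 // the plane that replaced the cube
  new THREE.MeshBasicMaterial({ map: texture })
);
screen.position.z = -3;                          // a bit closer to the camera
scene.add(screen);

function animate() {
  requestAnimationFrame(animate);
  texture.needsUpdate = true;                    // re-upload the canvas every frame
  renderer.render(scene, camera);
}
animate();
```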

------
paxcoder
What's holding the camera in the day 2 project?

~~~
brink
I'm guessing the rod holding it is patched out using software.

~~~
jesly_varghese
Yeah, that is generally the approach. Tripods are a common artifact in
360-degree videos and photos; if it's of professional quality, the tripod gets
blurred or patched out in some other manner.

~~~
paxcoder
I don't see the patch, do you?

