for a few weeks now - far from full time, as the HMD display, lenses (CV1 here) and position/rotation tracking are not good enough with the open source driver, but it feels really close. Apart from some tweaking and extra "spacing" between the focus window and the side windows, it works exactly as I want it to. I added some customisation so the 'layers' do not sit at different depths but on different arcs of the same layer. Switching "workspaces" simply rotates these layers to the 12 o'clock position. The biggest problem right now is not having front cameras to see what my hands are doing.
:) Neat. Sadly, setting that YouTube video to 720p-ish conveys my experience of reading text in a consumer VR HMD, which makes them hard to prototype with. Years ago there was a Java (research?) IDE (which I'm currently failing to quickly find) with a big window as a workspace and, on it, assorted task-specific little windowlets: an individual function, a data structure, a bit of call stack. I wonder what a 3D/VR IDE might look like if the "windows" were similarly disaggregated and bite-sized?
I tried it on my Vive Pro. The head/position tracking was not good enough (it comes from the OpenHMD reverse-engineering project), but I found the text good enough to use with Emacs. My problem was the lenses and the glare from a high-contrast color scheme.
On the PenTile-subpixeled Vive, where only green is at full resolution, I was using a green Emacs theme like base16-greenscreen-dark. But even on an RGB WMR headset, with custom subpixel rendering and tiny fonts... with a centered circular region of non-blurry pixels only, say, 500 px across, it just wasn't viable for me. I've heard of an ops person being happy with it, focused on terminals rather than on editing code. And as I'm on Linux, and focused on programming in VR rather than games, I've been using custom browser-as-compositor stacks, with only low-level Lighthouse drivers or optical tracking. Part waste of time, but part road less traveled. At a VR meetup yesterday, a well-informed speaker said "you still can't do X", and I was like: I've been trivially doing X for years now. There's been a lot of that.
https://arcan-fe.com/2018/03/29/safespaces-an-open-source-vr-desktop/