
Designing UI for VR Hand Tracking Applications - barbelldan
https://circuitstream.com/blog/oculus-quest-hand-tracking/
======
monkeydust
Been working on a VR dataviz application at work that can run on the Quest.
Started in Jan. Got a few weird looks from top management. Covid-19 happened,
WFH became a thing, top management is now interested in new methods of
collaboration and wants a demo.

~~~
zmmmmm
Have been thinking of doing something similar. The sad reality is that social
distancing and WFH also impose huge barriers because nobody has a headset and
sharing them is now both inconvenient and unsanitary. Hoping that post-COVID
it doesn't become a permanent setback for headset market growth.

~~~
monkeydust
Lack of access to headsets is a massive issue for the industry. I do think if
people could 'try' XR easily somewhere it would help with adoption but I
struggled to find anywhere around London months back where you could do that.

What is worth noting though is the Qualcomm release* last week on their XR
plans. It is interesting because they are tying it to 5G. Mobile network
operators who have spent billions on the infrastructure and now need to find
compelling use-cases for customers are partnering up with them. This could
mean you get an XR headset with your next mobile phone plan.

* [https://www.qualcomm.com/news/releases/2020/05/26/qualcomm-c...](https://www.qualcomm.com/news/releases/2020/05/26/qualcomm-collaborates-15-global-operators-deliver-xr-viewers)

------
keenmaster
VR hand tracking will be better as an input method when the user has little or
no expectation of physical resistance. Imagine being in a wizard game where
you have hundreds of spells memorized - water spells, fire, electricity,
teleportation, telekinetic manipulation, etc.... all of which are activated by
esoteric hand gestures. Make it an RPG with UE5-demo level graphics and it
will be one of the most mind-blowing games ever.

~~~
GuiA
Step 1: Make an amazing game with gestures

Step 2: ???

Step 3: Profit!

Joke aside, such game mechanics, albeit with cursor gesturing rather than hand
gesturing, have been tried many times before. See Arx Fatalis, Black & White,
etc. Jonathan Blow also has a pre-Braid prototype along those lines, which he
shows in some detail in one of his talks.

The reality is that sure, it sounds cool when you describe it like that, but
there are many design issues to contend with:

\- how does the player remember all these obscure symbols?

\- how do you deal with the frustration when a symbol isn't recognized? When
it's confused by the system with another symbol?

Etc. On top of that, there is the fact that drawing squiggles to cast spells
is not an inherently fun game mechanic. Why would people go to the bother of
doing that vs pressing a button?

It's all a lot of work to make it make sense, and there is little reason to
believe that gesturing with your hands instead of your mouse would solve some
of the more fundamental problems.

~~~
keenmaster
The limitations you describe are rooted in a pre-VR world. VR is much more
conducive to gestures. In Half Life Alyx, Valve could have technically allowed
you to lob a grenade by just pressing a button "because it's easier." But they
didn't, and opted for a gesture-based input. They could have allowed you to
turn a wheel by mashing a button, but they didn't, and you actually have to
turn the wheel in circles. These decisions would have been insane with a
keyboard and mouse as inputs. They're natural for a VR game.

Without a keyboard, it's cumbersome to switch between a bunch of spells.
Gestures are both more immersive and functionally better. Maybe most people
wouldn't memorize hundreds of spells, but a couple dozen are certainly
feasible. There could be a spell "language," where the initial part of the
gesture could correspond to the category of spell. Fireball, fire wave,
etc...can start with the same basic gesture. All wave spells can end with the
same gesture. Hundreds of spells can be constructed from permutations of 20-30
foundational gestures, kind of like how a core group of trilateral roots in
Arabic and Hebrew can construct a much larger set of words (sometimes without
previous knowledge of a specific word/permutation).
[https://en.wikipedia.org/wiki/Semitic_root](https://en.wikipedia.org/wiki/Semitic_root)
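
A spell "language" like the one described above can be sketched as a lookup
from gesture sequences to spells, where the first token fixes the category and
the last fixes the delivery form. Every gesture and spell name below is
invented for illustration; this is just a minimal sketch of the compositional
idea, not any real game's system:

```python
# Hypothetical spell vocabulary: each spell is a sequence of gesture
# tokens. Shared prefixes group spells by category ("flame", "frost"),
# shared suffixes group them by form ("point" = bolt, "sweep" = wave).
SPELLS = {
    ("flame", "point"): "fireball",
    ("flame", "sweep"): "fire wave",
    ("frost", "point"): "ice bolt",
    ("frost", "sweep"): "ice wave",
}

def resolve_spell(gesture_sequence):
    """Map a recognized sequence of gestures to a spell name, or None."""
    return SPELLS.get(tuple(gesture_sequence))

print(resolve_spell(["flame", "sweep"]))  # fire wave
print(resolve_spell(["frost", "point"]))  # ice bolt
print(resolve_spell(["flame"]))           # None (incomplete sequence)
```

With 20-30 foundational gestures, a few hundred spells fall out of the
permutations, and a player who knows the prefix/suffix conventions can guess a
spell they were never taught, which is the point of the Semitic-root analogy.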

~~~
GuiA
The examples you mention work well because there is a real world affordance
(the wheel, the act of throwing) that the gesture maps to. I agree that
rotating a wheel gesturally makes more sense as an interaction in VR than the
same action does with a mouse pointer.

The grenade example is a good one, because throwing/flicking doesn't require
much feedback, unlike say rotation or fine grained manipulation. Of course the
interaction would feel nicer if you could feel the weight of what you're
throwing - the physics of throwing objects always feels kind of off in VR
games - but it still works well enough. Note that many games did this with the
Wiimote, and there is nothing about that interaction that is singular to VR.

However, the moment you lose that affordance based in a real world object, and
ask your user/player to make gestures in mid air based on recollection, things
really do start to fall apart. There is a host of HCI literature/projects
around this - e.g. check out papers/demos from the Kinect era. An RPG game
like the one you describe could be made with Kinect, and I vaguely recall a
few in development - but Kinect demonstrated well that the only kind of
gestural interaction that works in long term sessions is dancing.

Btw, this is why if you go to e.g. the VR center in Tokyo, almost every booth
has physical props for you to hold/grip/sit on while you're
fishing/golfing/shooting a gun/riding a bike in VR.

~~~
Miraste
The only gesture input the Kinect could recognize reliably was dancing. I
don't think that says much about the appeal of gesture controls in a system
that can see finer movements. As a counterexample, iOS and macOS have a lot of
gestures which aren't based on real-world motions, and people use those all
the time.

~~~
GuiA
Kinect offered good full body skeletal tracking. Saying that it could “only
recognize dancing” is objectively wrong. And even if that were the issue, look
at e.g. Magic Leap, which had excellent finger tracking. What compelling things
were built for it?

The issue is that “gestures” are not a binary thing that you do or don't do;
they live on a continuum. What is waving vs raising your hand to swat a fly?
Categorizing these “gestures” in a meaningful way and layering effective
gameplay on top of them is the challenge, and pretty much no one has solved it
despite the huge incentives.

Gestures on two-dimensional multitouch screens are not what we’re talking
about here (and even then, you can’t expect people to remember much more
beyond the 5/6 core ones).

------
warp
Using _just_ hand tracking seems like a bad way to interact with anything.

I have an Oculus Quest, and the hand tracking is like the Kinect all over
again -- without physical buttons, every interaction is slower because the
software waits for you to hold a particular gesture/pose long enough at a
particular location to register as a "click".
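
The dwell-to-click behaviour described above can be sketched roughly as
follows. The pose flag, thresholds, and frame-update API are all invented for
illustration and are not the actual Quest SDK; the point is just that a click
only fires after the pose is held near one spot for a fixed time, which is why
every interaction feels slower than a button press:

```python
import math

DWELL_SECONDS = 0.8  # assumed: how long the pose must be held
MAX_DRIFT = 0.03     # assumed: metres the hand may wander and still count

class DwellClickDetector:
    """Fires a 'click' after a pose is held near one spot long enough."""

    def __init__(self):
        self.anchor = None   # position where the current hold started
        self.held_for = 0.0  # seconds the pose has been held so far

    def update(self, posing, position, dt):
        """Feed one tracking frame; returns True when a click fires."""
        if not posing:
            self.anchor, self.held_for = None, 0.0
            return False
        if self.anchor is None or self._drifted(position):
            # Pose just started, or the hand wandered: restart the timer.
            self.anchor, self.held_for = position, 0.0
            return False
        self.held_for += dt
        if self.held_for >= DWELL_SECONDS:
            self.anchor, self.held_for = None, 0.0  # re-arm for next click
            return True
        return False

    def _drifted(self, position):
        return math.dist(self.anchor, position) > MAX_DRIFT

# Holding a steady pose for ~1 second registers exactly one click:
det = DwellClickDetector()
clicks = sum(det.update(True, (0.0, 0.0, 0.0), 0.1) for _ in range(10))
print(clicks)  # 1
```

A physical button collapses all of this into a single frame, which is the
latency gap the comment is pointing at.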

Obviously the tracking in Quest is more fine-grained than the Kinect was, so
perhaps we'll see some interesting uses for the technology in the future. But
right now I'm not sold on the tech.

~~~
zmmmmm
Yes, it's disappointingly hard to use the whole UI with hands compared to the
controller. I guess it's a mix of improving the tracking and getting the UI
right. But even from where we are now, projecting forward to perfect tech I am
still not sure that I want to use hand tracking over controllers for most
things. A few types of games and other applications could clearly benefit, but
mostly controllers >> hands.

------
Qahlel
VR will never take off (at least in our lifetimes):

1\. battery issues

2\. weight issues

3\. quality issues

4\. VR is a niche market

~~~
dividido
It's already taking (or has taken) off. I don't think the first three points
above are typically valid or linked together. I use an "old" Oculus Rift setup
that is tethered to my desktop with a 1070 Ti. The only batteries to speak of
are in my hand controllers, and they last months. The headset is not heavy, so
you might be referring to the wearable packs seen in VR installations mixing
VR in a closed set/space?

Quality wise, visuals are pretty amazing. Good enough on mainstream games like
Half-Life Alyx, GTA5, Pavlov and Lone Echo to make me think and feel like I'm
in the environment. As a private pilot let me tell you that flying in vr in
X-Plane 11 is just amazing. 360 views and interactions with knobs/levers in
the cockpit. Even more, utilizing more advanced setups in unreal engine and
megascans/quixel goes to another level.

It's still not entirely accessible to everyone in terms of cost, as it's not
cheap, especially if you want the "best" graphics, which requires both the
headset and a computer with a decent GPU. So I do agree with point 4 about it
being a niche market.

I wonder what "taking off" means to most people? It's definitely not Lawnmower
Man/The Matrix/Ready Player One yet. It's cumbersome when using a tethered
setup, and room scale is amazing, but I don't exactly have a dedicated play
space to wander within or a specialized endless treadmill.

It does however provide an interesting escape with unique experiences.
Something as simple as google maps is mind blowing when traveling around the
world. I was recently working on a house project and needed to figure out the
remodel. I modeled the house in Maya, imported it into UE, and was able to put
on my oculus and "walk" around the space.

For me it's hard to go back to "flat gaming" after vr. I really enjoy it and
look forward to seeing how it continues to evolve.

~~~
outworlder
> It's definitely not Lawnmower Man/the matrix/ready player one yet

I think we have surpassed Lawnmower Man already. We are only missing the extra
hardware, like the full-body suits and moving beds/Leonardo devices. And even
then, not really: Lawnmower Man happened at a well-funded laboratory; the
scientist just had some expensive company hardware at home. We have better
quality headsets at home today.

Ready Player One was... inconsistent. They handwaved the movement problems.

> if you want the "best" graphics

This will always be true, VR headset or not. If you want the 'best' graphics,
you always have to fork over a lot of money for specialized GPU hardware. The
thing is, it's perfectly fine to use an untethered Quest to play something
like Superhot or Beat Saber.

> For me it's hard to go back to "flat gaming" after vr.

This is why I think the market is going to explode. Every single person I've
shown it to has walked away impressed. Young, old, doesn't matter. Even
non-gamers.

Oculus Quest and similar devices are on the right track.

> I modeled the house in Maya, imported it into UE, and was able to put on my
> oculus and "walk" around the space.

I'll definitely try this!

I wonder when 3D modeling will be primarily done with a headset. Like, would
it be useful to do the initial modelling in Maya itself?

~~~
dividido
> I wonder when 3D modeling will be primarily done with a headset. Like, would
> it be useful to do the initial modelling in Maya itself?

I think the desired goal would help dictate if you use VR. For example, for
the Lion King remake the world was entirely cg. To do the rough and final
layouts of the environment multiple people (director/art director/modelers/set
dressers/dp) all went into vr (sometimes together) and used tools in UE to
place/scale/rotate objects. This made a lot of sense b/c they could more
accurately place and model the environment within it.

It could be quite useful when modeling, but I wonder at what point it makes
sense. I've done a fair bit of modeling, which involves a lot of manipulation
of points and faces and also lots of object tumbling; that might prove tiring
in VR. But I think it ultimately makes sense b/c it allows for a
three-dimensional view of what you're doing.

~~~
outworlder
> To do the rough and final layouts of the environment multiple people
> (director/art director/modelers/set dressers/dp) all went into vr (sometimes
> together) and used tools in UE to place/scale/rotate objects.

I didn't know about that. This is amazing.

> I've done a fair bit of modeling which involves a lot of manipulation of
> points and faces and also involves lots of object tumbling

I've done some hobbyist-level CAD. Very often I had to turn the object
slightly in a couple of directions to get a better sense of perspective. That
comes for 'free' with VR, and you can also move your head, or yourself,
without turning.

Mind you, I'm picturing working while sitting down so as not to be too tiring.
Fine vertex manipulation may still be tiring, but I'm thinking organic
modelling would benefit. Also, especially for CAD, when you want to visualize
and "explode" your model, it normally involves lots of camera movement.

