Facial Performance Sensing Head-Mounted Display [pdf] (hao-li.com)
17 points by MichaelAO on July 7, 2016 | 4 comments

Since the title isn't very clear, here is the abstract:

> There are currently no solutions for enabling direct face-to-face interaction between virtual reality (VR) users wearing head-mounted displays (HMDs). The main challenge is that the headset obstructs a significant portion of a user’s face, preventing effective facial capture with traditional techniques. To advance virtual reality as a next-generation communication platform, we develop a novel HMD that enables 3D facial performance-driven animation in real-time. Our wearable system uses ultra-thin flexible electronic materials that are mounted on the foam liner of the headset to measure surface strain signals corresponding to upper face expressions. These strain signals are combined with a head-mounted RGB-D camera to enhance the tracking in the mouth region and to account for inaccurate HMD placement. To map the input signals to a 3D face model, we perform a single-instance offline training session for each person. For reusable and accurate online operation, we propose a short calibration step to readjust the Gaussian mixture distribution of the mapping before each use. The resulting animations are visually on par with cutting-edge depth sensor-driven facial performance capture systems and hence are suitable for social interactions in virtual worlds.
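The "short calibration step" the abstract mentions can be pictured as re-centering an offline-trained Gaussian mixture so it matches today's headset placement. The sketch below is a hypothetical illustration of that idea, not the paper's actual method: all names, dimensions, and the mean-shift calibration rule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for offline training artifacts: K mixture components over the
# joint space [strain signals (8-dim) ; face-model parameters (4-dim)].
K, D_STRAIN, D_FACE = 3, 8, 4
means = rng.normal(size=(K, D_STRAIN + D_FACE))

# Neutral-expression strain reading stored from the training session.
neutral_train = means[:, :D_STRAIN].mean(axis=0)

def calibrate(means, neutral_samples, neutral_train):
    """Shift the strain part of every component mean by the offset between
    the current session's neutral reading and the training-time neutral.
    The face-parameter dimensions are left untouched."""
    offset = neutral_samples.mean(axis=0) - neutral_train
    adjusted = means.copy()
    adjusted[:, :D_STRAIN] += offset  # re-center strain dimensions only
    return adjusted

# Simulate a new session where the HMD sits slightly differently:
# a constant placement offset of 0.05 plus small sensor noise.
session_neutral = neutral_train + 0.05 + 0.01 * rng.normal(size=(20, D_STRAIN))
adjusted_means = calibrate(means, session_neutral, neutral_train)
```

After calibration, the strain dimensions of every component have moved by roughly the placement offset, while the face-parameter dimensions are unchanged, so the same offline-trained mapping can be reused across sessions.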

Interesting, but it seems the technology is still somewhere in the uncanny valley.

Probably because the model isn't mapped to anyone's actual face (or a character if you prefer). This can be changed.
