
The "monitor you wear on your face" trope is simply inaccurate, and essentially a misunderstanding of the state of VR today.

I say this neither to criticize you nor to excuse the mistake by Oculus (they really needed to countersign their cert with a timestamp server), but to educate. These are non-obvious issues to people who don't follow the VR sector.

Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that state; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.

Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.

But there's much more. Here is a paste of a comment I made elsewhere:

Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.

For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset; there is a custom audio stack with spatialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.

Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.

All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.

Not to mention, the premise that monitors don't have drivers is also mistaken. They may not be necessary, but they are available[1]. And, the decision to sign kernel drivers is not a poor choice by Oculus, but a mandate from Microsoft for Windows 10 build 1607 and above.[2] A cert is, indeed, necessary to function.

Hope that was informative.

[1] http://www.aocmonitorap.com/my/download_driver.php

[2] "Starting with new installations of Windows 10, version 1607, Windows will not load any new kernel mode drivers which are not signed by the Dev Portal." - https://docs.microsoft.com/en-us/windows-hardware/drivers/in...




You said a cert is required, but the footnote quote says drivers must be signed. A signature doesn't expire. Could you rectify the discrepancy and explain why an expiring cert is a requirement for VR? Your analysis (though clearly highly informed) seems spurious to me.


Good question. An expiring cert is not required for VR. It was a massive screw-up by Oculus.

Most (I won't say all) certificates expire. However, there's a huge difference between an expired certificate and one whose expiry renders a driver invalid - and this is one of the two places Oculus erred.

When you sign a driver, you want it countersigned by a timeserver. This cryptographically assures that the cert used was valid at the time of signing, so the signature on the driver remains valid even if the signing cert expires (the crypto ensures a hacker can't just change the metadata with a hex editor). It allows the OS to confirm that the code was signed by a cert that was valid at the time of signature (even though now expired). Without it, the OS can only assume that the code was signed the same day as the validity check. Two days ago that was fine, but yesterday the signing cert expired and everything broke.
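To make that concrete, here's a toy sketch in Python of the two verification paths (purely illustrative - Windows's actual logic walks full Authenticode chains, not two datetime compares, and the dates below are made up):

    from datetime import datetime, timezone

    def cert_valid_at(not_before, not_after, when):
        """Is the signing cert inside its validity window at time `when`?"""
        return not_before <= when <= not_after

    def driver_signature_ok(not_before, not_after, countersign_time=None):
        # With a timestamp countersignature, the verifier can judge the cert
        # as of the (cryptographically attested) signing time. Without one,
        # the only defensible reference point is "right now".
        reference = countersign_time or datetime.now(timezone.utc)
        return cert_valid_at(not_before, not_after, reference)

    # A driver signed in 2017 with a cert that expired in March 2018:
    not_before = datetime(2015, 3, 1, tzinfo=timezone.utc)
    not_after = datetime(2018, 3, 7, tzinfo=timezone.utc)
    signed_on = datetime(2017, 2, 1, tzinfo=timezone.utc)

    print(driver_signature_ok(not_before, not_after, countersign_time=signed_on))
    # True: the cert was valid at the attested signing time
    print(driver_signature_ok(not_before, not_after))
    # False now that the cert has expired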

This was screw-up number one. Apparently, during the build process from Oculus's v.1.22 to 1.23 release, the timeserver countersignature was removed. This is obviously a mistake, because that took place about 30 days ago. No sane person would assume that they intentionally did something that would bring down their user base in a month.[1]

Obviously the second mistake was letting their certificate lapse. This was compounded by the fact that their update app was signed by the same cert, so they couldn't just push a quick fix (because the updater didn't work).

So in short, signatures don't expire, but the certificate used to do the signature does. With a timeserver countersignature the code would have kept running but no new code could be signed from the old (expired) cert.

Oculus missed some pretty big devops gaps, and suffered a big black eye for it.

But it had nothing to do with DRM, planned obsolescence, needing to connect to the internet, or Facebook data capture.

[1] Other commenters have mentioned that if a timeserver is down at the time of a build, it can fail to add the countersignature. Maybe that's what happened?


Great answers, thanks.

I've not looked at the MS requirements. It seems good to expect signed drivers, but a signature shows that the company made that driver at that time - that should never expire.

Sure, also have a mechanism of certification that shows whether a company currently vouches for a piece of software, but using that mechanism to override an [admin level] user and forcibly disable software - that's got to be always wrong.


Rereading your question, I realize I may not have actually answered an underlying topic: what is the difference between a certificate and a signature?

The short answer is:

- a "certificate" contains a number of things: a portion of an asymmetric key (either public or private), and a ton of metadata[1] to give information about that key: validity period, algorithms used, version, etc.

- a "signature" is the result of a crypto operation on data that proves the data (a) has not changed since the operation, and (b) the person doing the signing owns the private portion of that asymmetric key.

As I said in my other message, a signature doesn't expire, but it's directly related to (and generated by) the certificate used to create it. So if that creation certificate expires (or is revoked) it calls into question the validity of the signature(s) created from that certificate.
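If it helps, here's a small sketch of those two objects side by side using Python's `cryptography` package (my illustration; Authenticode's real container formats are more involved):

    from datetime import datetime, timedelta, timezone

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # An asymmetric key pair; the certificate will carry only the public half.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The certificate: a public key plus metadata (subject, issuer, validity
    # period, serial number, etc.), here self-signed for brevity.
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Signer")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed, so subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.now(timezone.utc))
        .not_valid_after(datetime.now(timezone.utc) + timedelta(days=365))
        .sign(key, hashes.SHA256())
    )

    # The signature: a crypto operation over the data with the private key.
    data = b"pretend this is a driver binary"
    signature = key.sign(data, padding.PKCS1v15(), hashes.SHA256())

    # Verifying proves (a) the data hasn't changed and (b) the signer holds
    # the private half of the key in the cert. Raises on tampering.
    cert.public_key().verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
    print("signature verifies; cert expires", cert.not_valid_after)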

Let me know if you're interested in more background on asymmetric cryptography and the relationship between public keys and crypto, private keys and signatures, and the role of certificate authorities vs. a PGP-oriented 'web of trust'.

[1] https://en.wikipedia.org/wiki/X.509#Sample_X.509_certificate...


> So if that creation certificate expires (or is revoked) it calls into question the validity of the signature(s) created from that certificate.

Are you arguing that already-installed drivers should no longer be trusted? I can't tell.

If a cert expires at time T, the usual assumption is that forging signatures before T is not feasible (otherwise the expiration was poorly chosen), while forging signatures after T might be feasible.

If it's after T and we see a new update, we don't know whether the signature was crafted before or after T, so we should assume the latter and reject it.

But if we've already installed a driver, then we must have received its signature before T, otherwise we wouldn't have installed it at the time. So we should still continue to trust it after T.
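Put as code (a conceptual sketch of the policy choice, not of Windows internals), the disagreement is about which reference time the driver loader uses:

    from datetime import datetime

    def should_load(install_time: datetime, cert_expiry: datetime,
                    now: datetime, policy: str) -> bool:
        """Two defensible policies for an already-installed, signed driver."""
        if policy == "trust_install_record":
            # The signature verified before the cert expired, at install
            # time, so keep trusting the driver after expiry.
            return install_time <= cert_expiry
        if policy == "reverify_at_load":
            # Re-judge the cert on every load; expiry stops the driver.
            # Apparently what Windows 10 1607+ does when the signature
            # lacks a timestamp countersignature.
            return now <= cert_expiry
        raise ValueError(f"unknown policy: {policy}")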


To be clear, I'm not arguing that old, already-installed drivers should fail if not countersigned. This seems like an extreme and customer-unfriendly failure case. However, I am saying that this appears to be the default implementation of Windows 10 build 1607+.

I won't argue it's right or wrong, actually. It's a choice, with different threat models driving different conclusions. Defining the failure modes with respect to security risks is a fraught business, and I hope Microsoft put a great deal of thought into it and has far more visibility into the risks than I. But it's what they appear to do, and we live in their world.

I argued elsewhere (in a late, top-level comment somewhere) that - if this is Windows's failure mode - MS should provide tools for devs to integrate into their build process that flag risky or mis-configured signature scenarios. This is too complicated, and used by too many non-security experts, with extreme failure modes, for it to be half-ass-able or easily done wrong.


Thanks for clarifying; I didn't realize that was Windows' behavior.


> But if we've already installed a driver, then we must have received its signature before T, otherwise we wouldn't have installed it at the time. So we should still continue to trust it after T.

And now you leave open an attack surface of "forge a signature off an old, expired cert and then fool the OS into thinking it's been installed all along."


It's because Oculus screwed up.


> Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.

Wait, is this new? I haven't used my Oculus in over 6 months because of how hard it was to interact with the desktop and a few other things while in-game. Is this a standard feature now for Oculus' framework?


This is in late beta, but anyone can opt in, try it, and opt out if it's not ready for them.

But I use it and it's amazing.

Edit: Here's the "sizzle reel": https://www.youtube.com/watch?v=SvP_RI_S-bw

Here's just someone using Home: https://www.youtube.com/watch?v=sMjlM5vFSA0

And here's a blog post about it: https://www.oculus.com/blog/rift-core-20-updates-beta-coming...


Yeah but, certs are not necessary for the Oculus Rift to function.


Drivers are necessary for the rift to function, and certs are necessary for drivers to function.


This is why half of the blame lies with Microsoft for following the rest of the industry into making software for grandma's protection to the detriment of software freedoms.

An enterprising user can turn off these driver-signing enforcement settings, but it's quite a song and dance, and first you have to even be aware of it.


I'm not going to blame the world's largest desktop operating system, primarily used by the least technical users, for optimizing for security over developer ease of use.

Besides, this is a false dichotomy - on your own computer you can self-sign the driver cert! The CA just has to be in a driver trust store.

The only people who lose out are those trying to distribute drivers to computers they have no control over and who cannot convince the user to install a certificate.


The signature of the driver doesn't expire when the certificate used does.


See my other comment replying to someone in parallel to this one.


Thanks.


So, it's basically a specialised-hacks-required-because-operating-systems-weren't-designed-with-it-in-mind-which-requires-driver-signing low-latency monitor for your face?


> For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz;

Could you share more info on this? Is it actually possible to poll the devices at that resolution from code?


I believe it is possible, though I doubt many people look at the raw data due to its limited usefulness. Without sensor fusion between the IMU (inertial measurement unit - the gyroscope/accelerometer package) and various other inputs (including absolute position fixes from the external sensors, which correct the dead-reckoning error), drift rapidly accumulates.

So, the SDK takes all the information in directly, does its calculations, and exposes only the resulting positions and orientations for hands and head. This resulting info is what developers typically use.

Here's an excerpt from a blog post[1] regarding the IMU and sensor fusion:

> With the new Oculus VR™ sensor, we support sampling rates up to 1000hz, which minimizes the time between the player’s head movement and the game engine receiving the sensor data to roughly 2 milliseconds.

> <snip interesting info about sensor fusion>

> In addition to raw data, the Oculus SDK provides a SensorFusion class that takes care of the details, returning orientation data as either rotation matrices, quaternions, or Euler angles.

Note that this blog is from back in dev kit 2 days. It's possible that Oculus removed the ability to retrieve raw data; in my hobbyist efforts I only use Unity's integration and don't work directly against the SDK.
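As a rough illustration of why raw IMU samples aren't consumed directly, here's a toy one-axis complementary filter in Python (my sketch, not Oculus's actual fusion code): integrate the gyro at high rate for responsiveness, and continuously bleed in a slower, drift-free reference (e.g. the optical tracking) to cancel accumulated bias.

    def fuse(gyro_rate_dps, reference_deg, angle_deg, dt, alpha=0.98):
        """One step of a toy complementary filter for a single axis.

        gyro_rate_dps: angular rate from the IMU in degrees/second (~1000 Hz)
        reference_deg: drift-free but slower/noisier absolute reference
        angle_deg:     current fused estimate
        alpha:         how much to trust the integrated gyro vs. the reference
        """
        integrated = angle_deg + gyro_rate_dps * dt  # responsive, but drifts
        return alpha * integrated + (1.0 - alpha) * reference_deg

    # A stationary headset with a gyro bias of 0.5 deg/s, fused at 1 kHz:
    angle = 0.0
    for _ in range(1000):  # one second of samples
        angle = fuse(gyro_rate_dps=0.5, reference_deg=0.0, angle_deg=angle, dt=0.001)
    print(round(angle, 3))  # ~0.02 deg, instead of the 0.5 deg pure integration gives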

[1] https://www.oculus.com/blog/building-a-sensor-for-low-latenc...


Well, I am sorry to have to disagree on this. This is not rocket science, and the software support isn't that different from any standard monitor/gamepad combo. That's for the architecture, at least. Of course, latency requirements are higher. But the differences stop there.

Face it, today's VR headsets simply are monitors that you wear on your face (head-mounted displays). Anyone thinking otherwise is simply lying to himself to make it sound more complicated than it is. They include a few input peripherals as well, none of which is particularly complex (Valve's Lighthouse system is probably as complex as it gets).

And lastly, none of these points should require a certificate. Every computation can be done locally, without the need of an internet connection.

To be a bit more specific, let's break down the arguments (I have nothing against you; I am just interested in the arguments themselves):

> Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that state; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.

This is true... Somewhat. For now, the only integration that has been done in the Linux kernel is DRM (direct rendering manager) leasing [1], which allows an application to borrow full control of the peripheral, to bypass compositing. That, and making sure that compositors don't detect HMDs as normal displays (so that they don't try to display your desktop on them). Please note that none of these are actually needed if the compositor is designed to support HMDs from the ground up. Those are just niceties, and the HMD is just treated like a regular device.

> Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.

Even if those monitors are physically separate, synchronization is likely handled by the HMD board itself. The monitors DON'T return positional information; they just display stuff (the accelerometer, gyro, compass, etc. are just other peripherals that happen to sit on the same board).

> Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.

Just like every peripheral under the sun, isn't it?

> For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset

Believe it or not, frequency and latency are probably not the most complicated things about the Lighthouse system; these specs are actually not uncommon for USB devices (I admit that I don't have a good example in mind, though).

> there is a custom audio stack with spatialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.

We are NOT talking about HMDs anymore at this point, and these feats have been accomplished countless times already, in various systems. The first one (spatialized audio) already exists in multiple forms of HRTF all over the place, including OpenAL, and would probably be a lot more common if Creative didn't try to sue everyone into the ground as soon as they try to do something interesting. The second thing (distortion correction) is not really complicated, and was done in Palmer Luckey's first proof of concept (or was it John Carmack who implemented it?). Interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.
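For what it's worth, the classic pre-warp is just a radial polynomial. A minimal Python version (the k coefficients here are made-up placeholders; real ones come from per-lens calibration) looks something like this:

    def barrel_warp(u, v, k1=0.22, k2=0.24):
        """Map an output pixel (u, v in [0, 1]) to the source texture
        coordinate, pre-distorting the image so the lens's pincushion
        distortion cancels it. k1 and k2 are illustrative, not calibrated."""
        cx, cy = u - 0.5, v - 0.5           # coordinates relative to lens center
        r2 = cx * cx + cy * cy              # squared radius from the center
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        return 0.5 + cx * scale, 0.5 + cy * scale

    print(barrel_warp(0.9, 0.5))  # samples outward from the naive coordinate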

> Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.

Again, this has nothing to do with HMDs. But, congratulations, you just wrote another compositor, and reinvented multitasking. This has been done countless times, and VR compositors have been made by multiple teams. Here is a nice open source one: [2].

> All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.

Well, so has everything else: controller support, graphics API support (whoops, actually the only two things needed), but also language support, processor architecture support, sound system support, operating system support, etc. Everyone needs a bit of code to support new architectures. Supporting the display portion of a HMD is relatively straightforward, and actually uses off-the-shelf APIs. Well, you have to correct for distortion, but I would be surprised if some APIs didn't come out [3] to support small variations between devices.

--

To conclude, yes, it's an impressive technology stack, but you could literally pick any other device in your computer and you would find comparable complexity. I am not trying to undermine the amount of work that went into HMDs and their stack, just pointing out that it's relatively common and straightforward.

And a HMD is by definition a monitor on your face :)

--

On the other hand, I just read the explanation (after writing this), and I agree that having your own kernel module makes sense for some of this (especially on Windows; on Linux you would just mainline support), if you want to make it happen faster. Yet, most of the above arguments do not serve the discussion ;)

I can get kernel drivers needing to be signed, but requiring the cert to remain valid after installation is a bit of a reach, isn't it?

Edit: thank you for the detailed explanation below.

[1] https://keithp.com/blogs/DRM-lease/

[2] https://github.com/SimulaVR/Simula

[3] https://github.com/ValveSoftware/openvr


Argument: A modern VR stack is much more complex, and does much more, than just displaying images on two screens.

Counterargument: The 16 things that happen other than just displaying images on the screen aren't relevant, have been done before, or have equivalent complexity to other systems.

Well OK. I just can't argue with that.

"A modern CPU SOC is no more than a souped up 6502."

That's true, if you ignore the integrated video, complex cache management, integration of networking/sound/northbridge/southbridge, secure enclaves, and significantly higher performance characteristics that result in subtle changes driving unexpected complexity. All of those things have been done elsewhere.

So if that's your perspective then we'll just have to agree to disagree.

Though I will point out that all of those non-monitor components you described also require custom drivers, which require their code to be signed - which was ultimately the item the OP took issue with. I'm frankly surprised that after acknowledging the amount of re-implementation VR requires, across numerous non-monitor disciplines, fusing the data in 11ms for a total motion-to-photon latency of 20ms or less, you still feel this is "common and straightforward."

But OK. I don't know your coding skill level, so this may be true.

And per this point:

> interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.

Valve has still not released an equivalent to Oculus's Asynchronous Spacewarp. If you feel it is "pretty doable", you would do a huge service to the SteamVR community if you could implement it and provide the code to Valve.

See https://developer.oculus.com/blog/asynchronous-spacewarp/ for details.


I would like to apologize for my previous post, I feel that it is unnecessarily long, and a bit inaccurate/exaggerated.

Let me be clear: I pretty much agree with everything you said. Only your original statement felt like a bit of a stretch to me:

> The "monitor you wear on your face" trope is simply inaccurate, and essentially a misunderstanding of the state of VR today

After reading a bit more into it, I feel that Oculus took the correct software approach to bring up its hardware on Windows. What happened appears to have been more of an oversight, and one that most people probably could have fallen for.

Custom (in-kernel) drivers are indeed probably a necessity to achieve the best possible experience, with the lowest attainable latency. However, they are not actually needed for basic support [1], which is where I think our misunderstanding comes from.

I realize that a tremendous amount of work has gone into making VR as realistic as it could get, and I am not trying to lessen it at all, which is what I think you wanted to point out with your original remark.

As much as I would like to have a go at implementing that kind of feature (and experimenting with VR headsets in general), I don't really have the hardware or the time to do so, unfortunately :)

--

[1] I don't know the latency involved with userspace-based USB libraries, but it seems to be low enough that Valve is using it to support the Vive, at least on Linux (and for now).


Thanks, no apologies needed. I didn't mean to come off snarky either. And I obviously am not averse to unnecessarily long messages.

As an aside, Valve's tracking solution is much less USB-intensive than Oculus's.

In Valve's Lighthouse system, sensors on the HMD and controllers use the angle of a laser sweep to calculate their absolute position in a room, providing the absolute reference needed to correct IMU dead-reckoning drift. As a result, the only data being sent over USB is the stream of sensor data and position (I believe sensor fusion still occurs in the SDK, not on device).

Oculus's Constellation system uses IR cameras, synchronized to an IR LED array on the HMD and controllers. Entire 1080p (or 720p, if used over USB 2) video frames from 2 to 4 cameras (depending on configuration) are sent via USB to the PC. This is in addition to the IMU data coming from the controllers. The SDK performs image processing to recognize the position of the LEDs in the images, triangulate their position, perform sensor fusion, and produce an absolute position.

The net result is roughly equivalent tracking between the two systems, but the USB and CPU overhead for Rift is greater (it's estimated that 1%-2% of CPU is used for image processing per sensor, but the Oculus SDK appears to have some performance advantages that allow equivalent performance on apps despite this overhead).

There is great debate over which is the more "advanced" solution. Lighthouse is wickedly clever, allowing a performant solution over larger tracking volumes with fewer cables and sensors.

Constellation is pretty brute-force, but requires highly accurate image recognition algorithms that (some say) give Oculus a leg-up in next generation tracking with no external sensors (see the Santa Cruz prototype[1] which is a PC-free device that uses 4 cameras on the HMD and on-board image processing to determine absolute position using only real-world cues). It also opens the door to full-body tracking using similar outside-in sensors.

But overall, the Valve solution definitely lends itself to a Linux implementation better than Oculus's, simply due to the lower I/O requirements. It also helps that Valve has published the Lighthouse calculations (which is just basic math), while Oculus has kept its image recognition algorithms as trade secrets.
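That "basic math" really is simple at its core: each base station fires an omnidirectional sync flash, then sweeps a laser across the room at a fixed rotational rate, so the delay between flash and laser hit gives every photodiode an angle; a horizontal and a vertical sweep define a ray, and rays to multiple sensors at known offsets on the device pin down its pose. A toy Python version of the timing-to-angle step (the rotor rate here is an illustrative figure, not a spec):

    import math

    ROTOR_HZ = 60.0  # sweep rate of the spinning laser (illustrative figure)

    def sweep_angle(t_sync, t_hit):
        """Angle of a photodiode as seen from the base station, derived purely
        from the time between the sync flash and the laser sweeping past it."""
        return 2.0 * math.pi * ROTOR_HZ * (t_hit - t_sync)

    # One horizontal and one vertical sweep yield a direction ray per sensor;
    # intersecting rays (across sensors at known offsets, or across two base
    # stations) recovers the 3D position.
    az = sweep_angle(t_sync=0.0, t_hit=1 / 240)  # a quarter rotation: 90 degrees
    el = sweep_angle(t_sync=0.0, t_hit=1 / 480)  # an eighth rotation: 45 degrees
    print(math.degrees(az), math.degrees(el))    # 90.0 45.0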

[1] https://arstechnica.com/gaming/2017/10/wireless-oculus-vr-gr...


The certs are for drivers, not an internet connection.

Internet connection is required for updates, for instance, in case you forgot to countersign your drivers against a timeserver.

Whoops.


Whoops indeed. :-D

About a $5m whoops, considering Oculus just gave everyone a $15 store credit due to the problem.

Sometimes education is expensive.


A surgeon doesn't care about any of this.


Arguing that medical devices don't fail is specious. The procedure for reporting errors that lead to deaths can be found here: https://www.fda.gov/MedicalDevices/Safety/ReportaProblem/def...

> "Each year, the FDA receives several hundred thousand medical device reports of suspected device-associated deaths, serious injuries and malfunctions."

It is also specious to argue that a consumer product is being used for live surgeries without FDA approval.

This does not excuse the mistake, nor does it change the fact that the error will make people question the reliability of the product - as they should.

However, mistakes do happen, even big ones. Rockets blow up. Airbags have defects that make them not work. McAfee pushed out an antivirus update that deleted a Windows system file, crashing hundreds of thousands of PCs.

The important questions are: how does the vendor respond, what procedures do they put into place to prevent it from happening again, and are those procedures enough to give future buyers confidence that the issues are addressed?

Saying "that shouldn't have happened," while perhaps true, is simply not constructive.



