Fortunately one of our engineers figured out we could get our demo rigs working by setting the clock back a few days. This could have been a huge disaster for our company if we hadn't found that workaround though. Pretty annoyed with Oculus about this
I say this not to either criticize you or excuse the mistake by Oculus (they really needed to countersign their cert with a timestamp server), but to educate. These are non-obvious issues to people that don't follow the VR sector.
Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that state; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.
Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.
But there's much more. Here is a paste of a comment I made elsewhere:
Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.
For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset; there is a custom audio stack with spatialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.
Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.
All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.
Not to mention, the premise that monitors don't have drivers is also mistaken. They may not be necessary, but they are available. And, the decision to sign kernel drivers is not a poor choice by Oculus, but a mandate from Microsoft for Windows 10 build 1607 and above. A cert is, indeed, necessary to function.
Hope that was informative.
 "Starting with new installations of Windows 10, version 1607, Windows will not load any new kernel mode drivers which are not signed by the Dev Portal." - https://docs.microsoft.com/en-us/windows-hardware/drivers/in...
Most (I won't say all) certificates expire. However, there's a huge difference between an expired certificate and one which renders a driver invalid - and this is one of the two places Oculus erred.
When you sign a driver, you want it countersigned by a timeserver. This cryptographically assures that the cert used was valid at the time of signing, so the signature on the driver remains valid even if the signing cert expires (the crypto ensures a hacker can't just change the metadata with a hex editor). It allows the OS to confirm that the code was signed by a cert that was valid at the time of signature (even though now expired). Without it, the OS can only assume that the code was signed the same day as the validity check. Two days ago that was fine, but yesterday the signing cert expired and everything broke.
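To make the failure mode concrete, here's a toy model of the validity check in Python. The dates are invented for illustration, and the check is deliberately simplified (real Authenticode validation also walks cert chains, checks revocation, verifies the timestamp authority's own cert, etc.):

```python
from datetime import datetime

def driver_signature_valid(check_time, cert_not_before, cert_not_after,
                           countersign_time=None):
    """Decide whether a code signature should still be trusted.

    A timestamp countersignature cryptographically proves *when* the
    signing happened; without one, the OS can only evaluate the cert
    as of the moment it performs the check.
    """
    if countersign_time is not None:
        # Trusted timestamp present: the cert only needs to have been
        # valid at the moment of signing.
        return cert_not_before <= countersign_time <= cert_not_after
    # No countersignature: the cert must be valid right now.
    return cert_not_before <= check_time <= cert_not_after

issued  = datetime(2015, 3, 7)
expires = datetime(2018, 3, 7)   # the cert lapses
signed  = datetime(2018, 2, 5)   # hypothetical v1.23 build date
today   = datetime(2018, 3, 8)   # one day after expiry

# With the countersignature, the driver keeps loading after expiry...
assert driver_signature_valid(today, issued, expires, countersign_time=signed)
# ...without it, everything breaks the day the cert expires.
assert not driver_signature_valid(today, issued, expires)
```

The two asserts are exactly the v1.22 vs. v1.23 situation: same driver, same cert, and the only difference is whether the build pipeline attached the timestamp.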
This was screw-up number one. Apparently, during the build process from Oculus's v.1.22 to 1.23 release, the timeserver countersignature was removed. This is obviously a mistake, because that took place about 30 days ago. No sane person would assume that they intentionally did something that would bring down their user base in a month.
Obviously the second mistake was letting their certificate lapse. This was compounded by the fact that their update app was signed by the same cert, so they couldn't just push a quick fix (because the updater didn't work).
So in short, signatures don't expire, but the certificate used to do the signature does. With a timeserver countersignature the code would have kept running but no new code could be signed from the old (expired) cert.
Oculus missed some pretty big devops gaps, and suffered a big black eye for it.
But it had nothing to do with DRM, planned obsolescence, needing to connect to the internet, or Facebook data capture.
 Other commenters have mentioned that if a timeserver is down at the time of a build, it can fail to add the countersignature. Maybe that's what happened?
I've not looked at the MS requirements. It seems good to expect signed drivers, but a signature shows that the company made that driver at that time - that should never expire.
Sure, also have a mechanism of certification that shows if a company vouches for a piece of software currently, but using that mechanism to override a [admin level] user and forcibly disable software, that's got to be always wrong.
The short answer is:
- a "certificate" contains a number of things: a portion of an asymmetric key (either public or private), and a ton of metadata to give information about that key: validity period, algorithms used, version, etc.
- a "signature" is the result of a crypto operation on data that proves the data (a) has not changed since the operation, and (b) the person doing the signing owns the private portion of that asymmetric key.
As I said in my other message, a signature doesn't expire, but it's directly related to (and generated by) the certificate used to create it. So if that creating certificate expires (or is revoked), it calls into question the validity of the signature(s) created from it.
Let me know if you're interested in more background on asymmetric cryptography and the relationship between public keys and crypto, private keys and signatures, and the role of certificate authorities vs. a PGP-oriented 'web of trust'.
Are you arguing that already-installed drivers should no longer be trusted? I can't tell.
If a cert expires at time T, the usual assumption is that forging signatures before T is not feasible (otherwise the expiration was poorly chosen), while forging signatures after T might be feasible.
If it's after T and we see a new update, we don't know whether the signature was crafted before or after T, so we should assume the latter and reject it.
But if we've already installed a driver, then we must have received its signature before T, otherwise we wouldn't have installed it at the time. So we should still continue to trust it after T.
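The threat model above can be written down almost literally. A sketch in Python, with time T and the dates made up for illustration:

```python
from datetime import datetime

CERT_EXPIRY = datetime(2018, 3, 7)  # time T in the argument above

def accept_new_update(now):
    # After T we can't tell whether a signature was crafted before or
    # after expiry, so assume the worst and reject the update.
    return now < CERT_EXPIRY

def keep_trusting_installed(installed_at):
    # We verified the signature at install time, which was necessarily
    # before T - so it can't be a post-expiry forgery.
    return installed_at < CERT_EXPIRY

# An already-installed driver stays trusted...
assert keep_trusting_installed(datetime(2018, 1, 10))
# ...but a fresh install attempted after expiry gets rejected.
assert not accept_new_update(datetime(2018, 3, 8))
```

The catch, of course, is the one raised below: this requires the OS to securely remember *when* it first saw each signature, which creates its own attack surface.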
I won't argue it's right or wrong, actually. It's a choice, with different threat models driving different conclusions. Defining the failure modes with respect to security risks is a fraught business, and I hope Microsoft put a great deal of thought into it and has far more visibility into the risks than I. But it's what they appear to do, and we live in their world.
I argued elsewhere (in a late, top-level comment somewhere) that - if this is Windows's failure mode - MS should provide tools for devs to integrate into their build process that flag risky or mis-configured signature scenarios. Code signing is too complicated, used by too many non-security experts, and has too-extreme failure modes for it to be half-assed or easy to get wrong.
And now you leave open an attack surface of "forge a signature off an old, expired cert and then fool the OS into thinking it's been installed all along."
Wait, is this new? I haven't used my Oculus in over 6 months because of how hard it was to interact with the desktop and a few other things while in-game. Is this a standard feature now in Oculus's framework?
But I use it and it's amazing.
Here's the "sizzle reel": https://www.youtube.com/watch?v=SvP_RI_S-bw
Here's just someone using Home: https://www.youtube.com/watch?v=sMjlM5vFSA0
And here's a blog post about it: https://www.oculus.com/blog/rift-core-20-updates-beta-coming...
An enterprising user can turn off these driver signing enforcement settings but it's quite a song and dance and first you have to even be aware of it.
Besides, this is a false dichotomy - On your own comp you can self-sign the driver cert! The CA just has to be in a driver trust store.
The only people who lose out are those trying to distribute drivers to computers they have no control over and who cannot convince the user to install a certificate.
Could you share more info on this? Is it actually possible to poll the devices at that resolution from code?
So, the SDK takes all the information in directly, does its calculations, and exposes only the resulting positions and orientations for hands and head. This resulting info is what developers typically use.
Here's an excerpt from a blog post regarding the IMU and sensor fusion:
> With the new Oculus VR™ sensor, we support sampling rates up to 1000hz, which minimizes the time between the player’s head movement and the game engine receiving the sensor data to roughly 2 milliseconds.
> <snip interesting info about sensor fusion>
> In addition to raw data, the Oculus SDK provides a SensorFusion class that takes care of the details, returning orientation data as either rotation matrices, quaternions, or Euler angles.
Note that this blog is from back in dev kit 2 days. It's possible that Oculus removed the ability to retrieve raw data; in my hobbyist efforts I only use Unity's integration and don't work directly against the SDK.
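For anyone wondering what "sensor fusion" means here: the standard trick is to blend the gyro's fast-but-drifting integration with the accelerometer's noisy-but-absolute gravity reference. A minimal one-axis complementary filter (a generic textbook sketch, not Oculus's SensorFusion implementation; the 1 ms step matches the 1000 Hz sampling mentioned above):

```python
import math

def complementary_filter(pitch, gyro_rate, accel_y, accel_z,
                         dt=0.001, alpha=0.98):
    """One 1000 Hz update step fusing gyro and accelerometer.

    pitch      -- current pitch estimate (radians)
    gyro_rate  -- angular velocity around the pitch axis (rad/s)
    accel_y/z  -- accelerometer components giving a gravity reference
    alpha      -- trust in the gyro; (1 - alpha) pulls out the drift
    """
    gyro_pitch = pitch + gyro_rate * dt          # fast, but drifts
    accel_pitch = math.atan2(accel_y, accel_z)   # absolute, but noisy
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# With the head held still (gravity straight down the z axis), any
# accumulated gyro drift decays back toward zero over repeated updates.
pitch = 0.1  # pretend we've drifted 0.1 rad
for _ in range(5000):  # five seconds at 1000 Hz
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel_y=0.0, accel_z=1.0)
```

Real HMD fusion uses quaternions, magnetometer yaw correction, and the optical tracking as the absolute reference, but the drift-correction principle is the same.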
Face it, today's VR headsets simply are monitors that you wear on your face (Head Mounted Displays). Anyone thinking otherwise is simply lying to himself to make it sound more complicated than it is.
Those include a few input peripherals as well, none of which is particularly complex (Valve's Lighthouse system is probably as complex as it gets).
And lastly, none of these points should require a certificate. Every computation can be done locally, without the need of an internet connection.
To be a bit more specific, let's break down the arguments (I have nothing against you, I am just interested in those):
> Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that state; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.
This is true... Somewhat. For now, the only integration that has been done in the Linux kernel is DRM (direct rendering manager) leasing, which allows an application to borrow full control of the peripheral, to bypass compositing. That, and making sure that compositors don't detect HMDs as normal displays (so that they don't try to display your desktop on them). Please note that none of these are actually needed if the compositor is designed to support HMDs from the ground up. Those are just niceties, and the HMD is just considered like a regular device.
> Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.
Even if those monitors are physically separate, this is likely something handled by the HMD board itself. The monitors DON'T return positional information, they just display stuff (accelerometer, gyro, compass, etc. are just other peripherals that happen to sit on the same board).
> Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.
Just like every peripheral under the sun, isn't it?
> For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset
Believe it or not, frequency and latency are probably not the most complicated thing with the lighthouse system; these specs are actually not uncommon for USB devices (I admit that I don't have a good example in mind, though).
> there is a custom audio stack with spatialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.
We are NOT talking about HMDs anymore at this point, and these feats have been accomplished countless times already, in various systems.
The first one already exists in multiple HRTF implementations all over the place, including OpenAL, and would probably be a lot more common if Creative didn't try to sue everyone into the ground as soon as they try to do something interesting. The second (distortion correction) is not really complicated, and was done in Palmer Luckey's first proof of concept (or was it John Carmack who implemented it?). Interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.
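To back up the "not really complicated" claim about distortion correction: the classic pre-warp is a radial polynomial that scales each point's distance from the lens center, so the lens's own distortion cancels it out. A sketch in Python (the k1/k2 coefficients are made up; real values come from calibrating the specific lens):

```python
def undistort(x, y, k1=0.22, k2=0.24):
    """Pre-warp a normalized screen coordinate so that, after the
    lens distorts the image, the point lands where intended.

    (x, y) are coordinates centered on the lens axis; k1 and k2 are
    illustrative radial distortion coefficients.
    """
    r2 = x * x + y * y                      # squared distance from center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # radial polynomial
    return x * scale, y * scale

# Points near the center are barely moved; points near the edge are
# pushed outward to cancel the lens pulling them inward.
cx, cy = undistort(0.01, 0.0)
ex, ey = undistort(0.7, 0.0)
```

In practice this runs per-pixel in a fragment shader (plus per-color-channel variants to fix chromatic aberration), but the math really is this small.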
> Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.
Again, this has nothing to do with HMDs. But, congratulations, you just wrote another compositor, and reinvented multitasking. This has been done countless times, and VR compositors have been made by multiple teams. Here is a nice open source one: .
> All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.
Well, so does: controller support, graphics API support (whoops, actually the only two things needed), but also language support, processor architecture support, sound system support, operating system support, etc.
Everyone needs a bit of code to support new architectures. Supporting the display portion of a HMD is relatively straightforward, and actually uses off-the-shelf APIs. Well, you have to correct for distortion, but I would be surprised if some APIs didn't come out to support small variations between devices.
To conclude, yes, it's an impressive technology stack, but you could literally pick any other device in your computer, and you would find comparable complexity. I am not trying to undermine the amount of work that went into HMDs and their stack, just pointing out that it's relatively common and straightforward.
And a HMD is by definition a monitor on your face :)
On the other hand, I just read the explanation (after writing this), and I agree that having your own kernel module makes sense for some of this (especially on Windows, on Linux you would just mainline support), if you want to make it happen faster. Yet, most of the above arguments do not serve the discussion ;)
I can get kernel drivers needing to be signed, but requiring the cert to remain valid after installation is a bit of a reach, isn't it?
Edit: thank you for the detailed explanation below.
Counterargument: The 16 things that happen other than just displaying images on the screen aren't relevant, have been done before, or have equivalent complexity to other systems.
Well OK. I just can't argue with that.
"A modern CPU SOC is no more than a souped up 6502."
That's true, if you ignore the integrated video, complex cache management, integration of networking/sound/northbridge/southbridge, secure enclaves, and significantly higher performance characteristics that result in subtle changes driving unexpected complexity. All of those things have been done elsewhere.
So if that's your perspective then we'll just have to agree to disagree.
Though I will point out the fact that all of those non-monitor components that you described also require custom drivers, which require their code to be signed, which was ultimately the item the OP took issue with. I'm frankly surprised that after acknowledging the amount of re-implementation VR requires, across numerous non-monitor disciplines, fusing the data in 11ms, for total motion-to-photon latency of 20ms or less, you still feel this is "common and straightforward."
But OK. I don't know your coding skill level, so this may be true.
And per this point:
> interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.
Valve has still not released an equivalent to Oculus's asynchronous spacewarp. If you feel it is "pretty doable" you would do a huge service to the SteamVR community if you could implement it and provide the code to Valve.
See https://developer.oculus.com/blog/asynchronous-spacewarp/ for details.
Let me be clear: I pretty much agree with everything you said. Only your original statement was what I felt a bit of a stretch:
> The "monitor you wear on your face" trope is simply inaccurate, and essentially a misunderstanding of the state of VR today
After reading a bit more into it, I feel that Oculus took the correct software approach to bring up its hardware on Windows. What happened appears to have been more of an oversight, one that most people probably could have fallen into themselves.
Custom (in-kernel) drivers are indeed probably a necessity to achieve the best possible experience, with the lowest attainable latency. However, they are not actually needed for basic support, which is where I think our misunderstanding comes from.
I realize that a tremendous amount of work has gone into making VR as realistic as it could get, and I am not trying to lessen it at all, which is what I think you wanted to point out with your original remark.
As much as I would like to have a go at implementing that kind of feature (and experiment with VR headsets in general), I don't really have the hardware nor the time to do so, unfortunately :)
I don't know the latency involved with userspace-based USB libraries, but it seems to be low enough that Valve is using it to support the Vive, at least on Linux (and for now).
As an aside, Valve's tracking solution is much less USB-intensive than Oculus's.
In Valve's Lighthouse system, sensors on the HMD and controllers use the angle of a laser sweep to calculate their absolute position in a room and provide the dead reckoning needed to correct IMU drift. As a result, the only data being sent over USB is the stream of sensor data and position (I believe sensor fusion still occurs in the SDK, not on device).
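The core of the Lighthouse calculation is striking in its simplicity: the base station's rotor spins at a known rate, so the delay between its sync flash and the laser sweep hitting a photodiode maps linearly to an angle. A sketch in Python (illustrative only; the real sync/timing protocol and the 60 Hz figure are more involved than this):

```python
import math

def sweep_angle(t_sync, t_hit, rotation_hz=60.0):
    """Convert sync-flash-to-laser-hit delay into a sweep angle.

    t_sync -- time the base station's sync pulse was seen (seconds)
    t_hit  -- time the rotating laser swept past this sensor
    Two perpendicular sweeps give two angles per sensor; with the
    known geometry of the sensors on the HMD, that's enough to
    solve for its pose in the room.
    """
    period = 1.0 / rotation_hz
    return 2.0 * math.pi * ((t_hit - t_sync) % period) / period

# A hit a quarter of a rotation after the sync pulse means the
# sensor sits 90 degrees into the sweep.
angle = sweep_angle(t_sync=0.0, t_hit=0.25 / 60.0)
```

Compare this few-microseconds-of-timing-per-sensor data rate with streaming whole camera frames, and the USB asymmetry described below is obvious.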
Oculus's Constellation system uses IR cameras, synchronized to an IR LED array on the HMD and controllers. The entire 1080p (or 720p, if used over USB2) video images (from 2 through 4 cameras, depending on configuration) are sent via USB to the PC. This is in addition to the IMU data coming from the controllers. The SDK performs image processing to recognize the position of the LEDs in the images, triangulate their position, perform sensor fusion, and produce an absolute position.
The net result is roughly equivalent tracking between the two systems, but the USB and CPU overhead for Rift is greater (it's estimated that 1%-2% of CPU is used for image processing per sensor, but the Oculus SDK appears to have some performance advantages that allow equivalent performance on apps despite this overhead).
There is great debate over which is the more "advanced" solution. Lighthouse is wickedly clever, allowing a performant solution over larger tracking volumes with fewer cables and sensors.
Constellation is pretty brute-force, but requires highly accurate image recognition algorithms that (some say) give Oculus a leg-up in next generation tracking with no external sensors (see the Santa Cruz prototype which is a PC-free device that uses 4 cameras on the HMD and on-board image processing to determine absolute position using only real-world cues). It also opens the door to full-body tracking using similar outside-in sensors.
But overall, the Valve solution definitely lends itself to a Linux implementation better than Oculus's, simply due to the lower I/O requirements. It also helps that Valve has published the Lighthouse calculations (which is just basic math), while Oculus has kept its image recognition algorithms as trade secrets.
Internet connection is required for updates, for instance, in case you forgot to countersign your drivers against a timeserver.
About a $5m whoops, considering Oculus just gave everyone a $15 store credit due to the problem.
Sometimes education is expensive.
> "Each year, the FDA receives several hundred thousand medical device reports of suspected device-associated deaths, serious injuries and malfunctions."
It is also specious to argue that a consumer product is being used for live surgeries without FDA approval.
This does not excuse the mistake, nor does it change the fact that the error will make people question the reliability of the product - as they should.
However, mistakes do happen, even big ones. Rockets blow up. Airbags have defects that make them not work. McAfee pushed out an antivirus update that deleted a Windows system file, crashing hundreds of thousands of PCs.
The important questions are: how does the vendor respond, what procedures do they put into place to prevent it from happening again, and are those procedures enough to give future buyers confidence that the issues are addressed?
Saying "that shouldn't have happened," while perhaps true, is simply not constructive.
I would like to see more companies write a Thank You letter from the CEO, signed by his managers. Something that he could use during his performance evaluations at the company, or attach to his resume for any other jobs.
It's hard to get concrete evidence like that, which shows your value to the company. It would be great to have documentation that could never be forgotten.
It seems ridiculous in this modern age, but there are a huge number of people who will never bother to look into their problems on their own before asking someone else. Then this other person does a simple Google search and becomes the hero expert.
This all too often results in further dependence, with no real reward for the guy who took this basic step except more requests in the future. If this one guy can get a day off in this instance, it'll be a victory for every person who has ever said "Oh, if you google that, you'll see one of the first results with instructions to do x, y, z." to a time-draining coworker.
Also, a lot of problems have their search engine results "poisoned" by solutions for lesser, but superficially similar problems that are worked to death by SEO content farms competing for attention.
I even seem to recall one that, when I set the clock back much more than 30 days, gave me as many extra days beyond the 30 as I had set it back.
Then there were a couple pieces of software that would detect such trickery and punish you by also taking away the time you had left, if you set the date back before the 30 days were up.
Anyway, with this in mind, the first thing I thought when I read the headlines was, “I wonder if one can get around this by setting the clock back”, and I doubt I was alone in that, so to say that it “probably wasn’t his idea”... I dunno man.
How low has the SW development bar gone, if "it's okay" now means "at least it's not directly killing people"?
I thought that was the way ever since OS/2 failed. Getting stuff out to customers has priority over quality control.
The bug is now patched, so the downtime appears to be less than 24 hours from discovery to fix. The original error is clearly a major blunder, but Oculus have responded properly.
It gets that low every time a hospital underfunds IT staff and makes horrible project management decisions and product buying decisions.
I've seen that first hand. There's a bunch of corpses at the IT entrance of people who've tried to turn that around.
In other words, comparing to the worst possible outcome is, by definition, not a very high bar.
Something like this probably will happen with computer assisted surgery or medical procedures, or an aircraft in flight. Just a matter of time.
By the way, do you have any links to your surgical training startup? I'm doing some research into VR/AR for surgical telementoring and training and would be interested in seeing how it's in use.
We've had a fair amount of press coverage in specialist press recently so a search for Osso VR should turn up some recent articles too.
Wow. That guy sounds interesting...
It doesn't help that vendors are generally nervous about liability in medical equipment (this fear is often unfounded, but persistent). As a result, vendors of commercial and industrial equipment generally don't want to engage medical device OEMs with engineering and customization support. If there had been that sort of support in this case, Oculus might have made a custom build without the cert check, just as a de-risking measure.
This vendor reluctance is especially present at the FDA Class III (high risk device) level - most vendors outright prohibit use of their devices in these products. It's an open secret that this still happens anyway in a wink-wink nudge-nudge fashion, just without vendor support - which is arguably worse, but it keeps the lawyers happy.
The real MVP of this story. Sometimes a dirty hack is good enough.