
Our VR surgical training startup has been working for the last few months towards a big medical conference this week where we're showing multiple training procedures for multiple customers on Oculus Rift, as well as having our own booth. The headsets all stopped working the morning of the conference.

Fortunately one of our engineers figured out we could get our demo rigs working by setting the clock back a few days. This could have been a huge disaster for our company if we hadn't found that workaround, though. Pretty annoyed with Oculus about this.




This does not bode well for real VR surgery. Imagine if this were surgery day for someone, and because of an expiring certificate the Rift shuts down ...


It's not like a cert is necessary for it to function. A VR headset is basically a monitor you wear on your face. This is their own poor design choice that just ensures they're going to lose the business of anyone who needs reliability in their headset.


The "monitor you wear on your face" trope is simply inaccurate, and essentially a misunderstanding of the state of VR today.

I say this not to either criticize you or excuse the mistake by Oculus (they really needed to countersign their cert with a timestamp server), but to educate. These are non-obvious issues to people that don't follow the VR sector.

Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that state; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.

Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.

But there's much more. Here is a paste of a comment I made elsewhere:

Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.

For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset; there is a custom audio stack with spatialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.

Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.

All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.

Not to mention, the premise that monitors don't have drivers is also mistaken. They may not be necessary, but they are available[1]. And, the decision to sign kernel drivers is not a poor choice by Oculus, but a mandate from Microsoft for Windows 10 build 1607 and above.[2] A cert is, indeed, necessary to function.

Hope that was informative.

[1] http://www.aocmonitorap.com/my/download_driver.php [2] "Starting with new installations of Windows 10, version 1607, Windows will not load any new kernel mode drivers which are not signed by the Dev Portal." - https://docs.microsoft.com/en-us/windows-hardware/drivers/in...


You said a cert is required, but the footnote quote says drivers must be signed. A signature doesn't expire. Could you rectify the discrepancy and explain why an expiring cert is a requirement for VR? Your analysis (though clearly highly informed) seems spurious to me.


Good question. An expiring cert is not required for VR. It was a massive screw-up by Oculus.

Most (I won't say all) certificates expire. However, there's a huge difference between an expired certificate and one that renders a driver invalid - and this is one of the two places Oculus erred.

When you sign a driver, you want it countersigned by a timeserver. This cryptographically assures that the cert used was valid at the time of signing, so the signature on the driver remains valid even if the signing cert expires (the crypto ensures a hacker can't just change the metadata with a hex editor). It allows the OS to confirm that the code was signed by a cert that was valid at the time of signature (even though now expired). Without it, the OS can only assume that the code was signed the same day as the validity check. Two days ago that was fine, but yesterday the signing cert expired and everything broke.

This was screw-up number one. Apparently, during the build process from Oculus's v.1.22 to 1.23 release, the timeserver countersignature was removed. This is obviously a mistake, because that took place about 30 days ago. No sane person would assume that they intentionally did something that would bring down their user base in a month.[1]

Obviously the second mistake was letting their certificate lapse. This was compounded by the fact that their update app was signed by the same cert, so they couldn't just push a quick fix (because the updater didn't work).

So in short, signatures don't expire, but the certificate used to do the signature does. With a timeserver countersignature the code would have kept running but no new code could be signed from the old (expired) cert.
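To make that concrete, here is a toy sketch of the decision the OS loader has to make (purely illustrative Python, not how Windows actually implements the check; all names are mine):

    from datetime import datetime
    from typing import Optional

    def signature_acceptable(cert_not_before: datetime,
                             cert_not_after: datetime,
                             now: datetime,
                             countersign_time: Optional[datetime]) -> bool:
        """Toy model of validating a code signature against its signing cert."""
        if countersign_time is not None:
            # A trusted timeserver countersignature proves *when* the code was
            # signed, so the cert only has to have been valid at that moment.
            return cert_not_before <= countersign_time <= cert_not_after
        # With no countersignature there is no trustworthy signing time, so the
        # conservative fallback is to require the cert to be valid right now,
        # which is exactly the branch that broke when the Oculus cert expired.
        return cert_not_before <= now <= cert_not_after

With the countersignature missing from the 1.23 build, every validity check fell into that second branch, and the moment the cert expired the driver stopped loading.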

Oculus missed some pretty big devops gaps, and suffered a big black eye for it.

But it had nothing to do with DRM, planned obsolescence, needing to connect to the internet, or Facebook data capture.

[1] Other commenters have mentioned that if a timeserver is down at the time of a build, it can fail to add the countersignature. Maybe that's what happened?


Great answers, thanks.

I've not looked at the MS requirements; it seems good to expect signed drivers. But a signature shows that the company made that driver at that time - that should never expire.

Sure, also have a mechanism of certification that shows whether a company currently vouches for a piece of software, but using that mechanism to override an [admin-level] user and forcibly disable software - that has to be always wrong.


Rereading your question, I realize I may not have actually answered an underlying topic: what is the difference between a certificate and a signature?

The short answer is:

- a "certificate" contains a number of things: a portion of an asymmetric key (either public or private), and a ton of metadata[1] to give information about that key: validity period, algorithms used, version, etc.

- a "signature" is the result of a crypto operation on data that proves the data (a) has not changed since the operation, and (b) the person doing the signing owns the private portion of that asymmetric key.

As I said in my other message, a signature doesn't expire, but it is directly tied to (and generated with the private key behind) the certificate used to create it. So if that certificate expires (or is revoked), it calls into question the validity of the signature(s) created from it.
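If you want to see those moving parts in code, here is a toy example using Python's cryptography package (illustrative only; real driver signing uses Authenticode/PKCS#7 and a CA-issued cert, not a self-signed one):

    from datetime import datetime, timedelta, timezone

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.x509.oid import NameOID

    # The asymmetric key pair. The private half never leaves the signer.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # A (self-signed) certificate: the public half of the key plus metadata,
    # including the validity window that caused all the trouble here.
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"toy-signer")])
    start = datetime.now(timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(start)
        .not_valid_after(start + timedelta(days=365))  # the expiry date
        .sign(key, hashes.SHA256())
    )

    # A signature: a crypto operation over the data using the private key.
    data = b"pretend this is a driver binary"
    signature = key.sign(data, padding.PKCS1v15(), hashes.SHA256())

    # Anyone holding the certificate can verify the signature with the public
    # key inside it. The math still checks out after the cert expires;
    # "expired" is a policy decision layered on top of the crypto.
    cert.public_key().verify(signature, data, padding.PKCS1v15(), hashes.SHA256())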

Let me know if you're interested in more background on asymmetric cryptography and the relationship between public keys and crypto, private keys and signatures, and the role of certificate authorities vs. a PGP-oriented 'web of trust'.

[1] https://en.wikipedia.org/wiki/X.509#Sample_X.509_certificate...


> So if that creation certificate expires (or is revoked) it calls into question the validity of the signature(s) created from that certificate.

Are you arguing that already-installed drivers should no longer be trusted? I can't tell.

If a cert expires at time T, the usual assumption is that forging signatures before T is not feasible (otherwise the expiration was poorly chosen), while forging signatures after T might be feasible.

If it's after T and we see a new update, we don't know whether the signature was crafted before or after T, so we should assume the latter and reject it.

But if we've already installed a driver, then we must have received its signature before T, otherwise we wouldn't have installed it at the time. So we should still continue to trust it after T.


To be clear, I'm not arguing that old, already-installed drivers should fail if not countersigned. This seems like an extreme and customer-unfriendly failure case. However, I am saying that this appears to be the default implementation of Windows 10 build 1607+.

I won't argue it's right or wrong, actually. It's a choice, with different threat models driving different conclusions. Defining the failure modes with respect to security risks is a fraught business, and I hope Microsoft put a great deal of thought into it and has far more visibility into the risks than I. But it's what they appear to do, and we live in their world.

I argued elsewhere (in a late, top-level comment somewhere) that - if this is Windows's failure mode - MS should provide tools for devs to integrate into their build process that flag risky or mis-configured signature scenarios. This is too complicated, and used by too many non-security experts, with extreme failure modes, for it to be half-ass-able or easily done wrong.


Thanks for clarifying; I didn't realize that was Windows' behavior.


> But if we've already installed a driver, then we must have received its signature before T, otherwise we wouldn't have installed it at the time. So we should still continue to trust it after T.

And now you leave open an attack surface of "forge a signature off an old, expired cert and then fool the OS into thinking it's been installed all along."


It's because Oculus screwed up.


> Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.

Wait, is this new? I haven't used my Oculus in over 6 months because of how hard it was to interact with the desktop and a few other things while in-game. Is this a standard feature now for Oculus' framework?


This is in late beta, but anyone can opt in, try it, and opt out if it's not ready for them.

But I use it and it's amazing.

Edit: Here's the "sizzle reel": https://www.youtube.com/watch?v=SvP_RI_S-bw

Here's just someone using Home: https://www.youtube.com/watch?v=sMjlM5vFSA0

And here's a blog post about it: https://www.oculus.com/blog/rift-core-20-updates-beta-coming...


Yeah, but certs are not necessary for the Oculus Rift to function.


Drivers are necessary for the rift to function, and certs are necessary for drivers to function.


This is why half of the blame lies with Microsoft for following the rest of the industry into making software for grandma's protection to the detriment of software freedoms.

An enterprising user can turn off these driver signing enforcement settings but it's quite a song and dance and first you have to even be aware of it.


I'm not going to blame the world's largest desktop operating system, primarily used by the least technical users, for optimizing security over developer ease-of-use.

Besides, this is a false dichotomy - On your own comp you can self-sign the driver cert! The CA just has to be in a driver trust store.

The only people who lose out are those trying to distribute drivers to computers they have no control over and who cannot convince the user to install a certificate.


The signature of the driver doesn't expire when the certificate used does.


See my other comment replying to someone in parallel to this one.


Thanks.


So, it's basically a specialised-hacks-required-because-operating-systems-weren't-designed-with-it-in-mind-which-requires-driver-signing low-latency monitor for your face?


> For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz;

Could you share more info on this? Is it actually possible to poll the devices at that resolution from code?


I believe it is possible, though I doubt many people look at the raw data due to its limited usefulness. Without sensor fusion between the IMU (inertial measurement unit: gyroscope plus accelerometer) and various other inputs (including absolute position corrections from the external sensors), drift error rapidly accumulates.

So, the SDK takes all the information in directly, does its calculations, and exposes only the resulting positions and orientations for hands and head. This resulting info is what developers typically use.

Here's an excerpt from a blog post[1] regarding the IMU and sensor fusion:

> With the new Oculus VR™ sensor, we support sampling rates up to 1000hz, which minimizes the time between the player’s head movement and the game engine receiving the sensor data to roughly 2 milliseconds.

> <snip interesting info about sensor fusion>

> In addition to raw data, the Oculus SDK provides a SensorFusion class that takes care of the details, returning orientation data as either rotation matrices, quaternions, or Euler angles.

Note that this blog is from back in dev kit 2 days. It's possible that Oculus removed the ability to retrieve raw data; in my hobbyist efforts I only use Unity's integration and don't work directly against the SDK.
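If you're curious what "sensor fusion" looks like at its very simplest, here's a toy one-axis complementary filter (nothing like the actual SDK internals, just the general idea of blending a fast-but-drifting source with a slow-but-absolute one):

    def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
        """Toy 1-axis orientation fusion.

        prev_angle:  last fused estimate (radians)
        gyro_rate:   angular velocity from the gyro (rad/s), fast but drifts
        accel_angle: angle derived from the accelerometer, noisy but drift-free
        """
        gyro_angle = prev_angle + gyro_rate * dt               # integrate the gyro
        return alpha * gyro_angle + (1 - alpha) * accel_angle  # bleed off drift

Real HMD fusion works in full 3-D with quaternions and also folds in positional corrections from the external tracking, but the blend-fast-with-slow idea is the same.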

[1] https://www.oculus.com/blog/building-a-sensor-for-low-latenc...


Well, I am sorry to have to disagree on this. This is not rocket science, and the software support isn't that different from any standard monitor/gamepad combo. That's for the architecture, at least. Of course, latency requirements are higher. But the differences stop there.

Face it, today's VR headsets simply are monitors that you wear on your face (Head Mounted Displays). Anyone thinking otherwise is simply lying to himself to make it sound more complicated than it is. They include a few input peripherals as well, none of which is particularly complex (Valve's Lighthouse system is probably as complex as it gets).

And lastly, none of these points should require a certificate. Every computation can be done locally, without the need for an internet connection.

To be a bit more specific, let's break down the arguments (I have nothing against you; I am just interested in the arguments themselves):

> Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that state; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.

This is true... Somewhat. For now, the only integration that has been done in the Linux kernel is DRM (direct rendering manager) leasing [1], which allows an application to borrow full control of the peripheral, to bypass compositing. That, and making sure that compositors don't detect HMDs as normal displays (so that they don't try to display your desktop on them). Please note that none of these are actually needed if the compositor is designed to support HMDs from the ground up. Those are just niceties, and the HMD is just treated like a regular device.

> Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.

Even if those monitors are physically separate, this is likely something handled by the HMD board itself. The monitors DON'T return positional information; they just display stuff (accelerometer, gyro, compass, etc. are just other peripherals that happen to sit on the same board).

> Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.

Just like every peripheral under the sun, isn't it?

> For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset

Believe it or not, frequency and latency are probably not the most complicated things about this kind of tracking system; these specs are actually not uncommon for USB devices (I admit that I don't have a good example in mind, though).

> there is a custom audio stack with spatialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.

We are NOT talking about HMDs anymore at this point, and these feats have been accomplished countless times already, in various systems. The first already exists in multiple forms of HRTF just about everywhere, including OpenAL, and would probably be a lot more common if Creative didn't try to sue everyone into the ground as soon as they try to do something interesting. The second (distortion correction) is not really complicated, and was done in Palmer Luckey's first proof of concept (or was it John Carmack who implemented it?). Interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.
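To illustrate how simple the basic version of the distortion correction is, here's a toy radial pre-warp of the early-DK1 variety (coefficients made up for illustration, not any real headset's values):

    import numpy as np

    def prewarp(uv, k=(1.0, 0.22, 0.24)):
        """Toy barrel pre-distortion: the renderer warps the image so that the
        headset lens un-warps it.

        uv: Nx2 array of view coordinates centered on the lens axis
        k:  illustrative radial polynomial coefficients
        """
        r2 = np.sum(uv ** 2, axis=1, keepdims=True)    # squared radius per point
        scale = k[0] + k[1] * r2 + k[2] * r2 ** 2      # radial scale factor
        return uv * scale

The radial warp itself is a few lines; the frame interpolation is the genuinely harder part, as conceded above.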

> Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.

Again, this has nothing to do with HMDs. But, congratulations, you just wrote another compositor, and reinvented multitasking. This has been done countless times, and VR compositors have been made by multiple teams. Here is a nice open source one: [2].

> All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.

Well, so has: controller support, graphics API support (whoops, actually the only two things needed), but also language support, processor architecture support, sound system support, operating system support, etc. Everyone needs a bit of code to support new architectures. Supporting the display portion of an HMD is relatively straightforward, and actually uses off-the-shelf APIs. Well, you have to correct for distortion, but I would be surprised if some APIs didn't come out [3] to support small variations between devices.

--

To conclude, yes, it's an impressive technology stack, but you could literally pick any other device in your computer, and you would get comparable complexity. I am not trying to undermine the amount of work that went into HMDs and their stack, just pointing out that it's relatively common and straightforward.

And an HMD is by definition a monitor on your face :)

--

On the other hand, I just read the explanation (after writing this), and I agree that having your own kernel module makes sense for some of this (especially on Windows; on Linux you would just mainline the support), if you want to make it happen faster. Yet, most of the above arguments do not serve the discussion ;)

I can get kernel drivers needing to be signed, but requiring the cert to remain valid after installation is a bit of a reach, isn't it?

Edit: thank you for the detailed explanation below.

[1] https://keithp.com/blogs/DRM-lease/

[2] https://github.com/SimulaVR/Simula

[3] https://github.com/ValveSoftware/openvr


Argument: A modern VR stack is much more complex, and does much more, than just displaying images on two screens.

Counterargument: The 16 things that happen other than just displaying images on the screen aren't relevant, have been done before, or have equivalent complexity to other systems.

Well OK. I just can't argue with that.

"A modern CPU SOC is no more than a souped up 6502."

That's true, if you ignore the integrated video, complex cache management, integration of networking/sound/northbridge/southbridge, secure enclaves, and significantly higher performance characteristics that result in subtle changes driving unexpected complexity. All of those things have been done elsewhere.

So if that's your perspective then we'll just have to agree to disagree.

Though I will point out the fact that all of those non-monitor components that you described also require custom drivers, which require their code to be signed, which was ultimately the item the OP took issue with. I'm frankly surprised that after acknowledging the amount of re-implementation VR requires, across numerous non-monitor disciplines, fusing the data in 11ms, for total motion-to-photon latency of 20ms or less, you still feel this is "common and straightforward."

But OK. I don't know your coding skill level, so this may be true.

And per this point:

> interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.

Valve has still not released an equivalent to Oculus's asynchronous spacewarp. If you feel it is "pretty doable" you would do a huge service to the SteamVR community if you could implement it and provide the code to Valve.

See https://developer.oculus.com/blog/asynchronous-spacewarp/ for details.


I would like to apologize for my previous post, I feel that it is unnecessarily long, and a bit inaccurate/exaggerated.

Let me be clear: I pretty much agree with everything you said. It was only your original statement that I felt was a bit of a stretch:

> The "monitor you wear on your face" trope is simply inaccurate, and essentially a misunderstanding of the state of VR today

After reading a bit more into it, I feel that Oculus took the correct software approach to bring up its hardware on Windows. What happened appears to have been more of an oversight, one that most people could probably sympathize with.

Custom (in-kernel) drivers are indeed probably a necessity to achieve the best possible experience, with the lowest attainable latency. However, they are not actually needed for basic support [1], which is where I think our misunderstanding comes from.

I realize that a tremendous amount of work has gone into making VR as realistic as it could get, and I am not trying to lessen it at all, which is what I think you wanted to point out with your original remark.

As much as I would like to have a go at implementing that kind of feature (and experiment with VR headsets in general), I don't really have the hardware nor the time to do so, unfortunately :)

--

[1] I don't know the latency involved with userspace-based USB libraries, but it seems to be low enough that Valve is using it to support the Vive, at least on Linux (and for now).


Thanks, no apologies needed. I didn't mean to come off snarky either. And I obviously am not averse to unnecessarily long messages.

As an aside, Valve's tracking solution is much less USB-intensive than Oculus's.

In Valve's Lighthouse system, sensors on the HMD and controllers use the angle of a laser sweep to calculate their absolute position in a room, providing the absolute reference needed to correct IMU drift. As a result, the only data being sent over USB is the stream of sensor data and position (I believe sensor fusion still occurs in the SDK, not on the device).
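The core of the Lighthouse angle measurement is almost trivial. A toy sketch, ignoring the second sweep axis and the real-world timing and calibration details (the rotation period here is purely illustrative):

    import math

    def sweep_angle(t_sync, t_hit, rotation_period=1.0 / 60):
        """Toy Lighthouse-style measurement: a base station emits a sync flash,
        then a laser plane sweeps the room at a fixed rotation rate. The delay
        between the flash and the laser hitting a photodiode encodes that
        sensor's angle from the base station."""
        return 2 * math.pi * (t_hit - t_sync) / rotation_period

Combine the angles from two sweep axes (and two base stations, plus the known sensor layout on the device) and you can solve for position and orientation.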

Oculus's Constellation system uses IR cameras, synchronized to an IR LED array on the HMD and controllers. The full 1080p (or 720p, if used over USB 2) video streams (from 2 to 4 cameras, depending on configuration) are sent via USB to the PC. This is in addition to the IMU data coming from the controllers. The SDK performs image processing to recognize the position of the LEDs in the images, triangulate their position, perform sensor fusion, and produce an absolute position.
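On the Constellation side, the image recognition is the hard part; once you have matched the same LED blob in two calibrated cameras, the geometry is standard two-view triangulation (toy sketch, assuming known camera origins and unit ray directions):

    import numpy as np

    def triangulate(o1, d1, o2, d2):
        """Toy triangulation: midpoint of the shortest segment between two
        non-parallel rays (origin o, unit direction d) through the same blob.
        Real tracking solves for many LEDs at once against a known rigid model."""
        w = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w, d2 @ w
        denom = a * c - b * b                  # zero only for parallel rays
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2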

The net result is roughly equivalent tracking between the two systems, but the USB and CPU overhead for Rift is greater (it's estimated that 1%-2% of CPU is used for image processing per sensor, but the Oculus SDK appears to have some performance advantages that allow equivalent performance on apps despite this overhead).

There is great debate over which is the more "advanced" solution. Lighthouse is wickedly clever, allowing a performant solution over larger tracking volumes with fewer cables and sensors.

Constellation is pretty brute-force, but requires highly accurate image recognition algorithms that (some say) give Oculus a leg-up in next generation tracking with no external sensors (see the Santa Cruz prototype[1] which is a PC-free device that uses 4 cameras on the HMD and on-board image processing to determine absolute position using only real-world cues). It also opens the door to full-body tracking using similar outside-in sensors.

But overall, the Valve solution definitely lends itself to a Linux implementation better than Oculus's, simply due to the lower I/O requirements. It also helps that Valve has published the Lighthouse calculations (which are just basic math), while Oculus has kept its image recognition algorithms as trade secrets.

[1] https://arstechnica.com/gaming/2017/10/wireless-oculus-vr-gr...


The certs are for drivers, not an internet connection.

Internet connection is required for updates, for instance, in case you forgot to countersign your drivers against a timeserver.

Whoops.


Whoops indeed. :-D

About a $5m whoops, considering Oculus just gave everyone a $15 store credit due to the problem.

Sometimes education is expensive.


A surgeon doesn't care about any of this.


Arguing that medical devices don't fail is specious. The procedure for reporting errors that lead to deaths can be found here: https://www.fda.gov/MedicalDevices/Safety/ReportaProblem/def...

> "Each year, the FDA receives several hundred thousand medical device reports of suspected device-associated deaths, serious injuries and malfunctions."

It is also specious to argue that a consumer product is being used for live surgeries without FDA approval.

This does not excuse the mistake, nor does it change the fact that the error will make people question the reliability of the product - as they should.

However, mistakes do happen, even big ones. Rockets blow up. Airbags have defects that make them not work. McAfee pushed out an antivirus update that deleted a Windows system file, crashing hundreds of thousands of PCs.

The important questions are: how does the vendor respond, what procedures do they put into place to prevent it from happening again, and are those procedures enough to give future buyers confidence that the issues are addressed?

Saying "that shouldn't have happened," while perhaps true, is simply not constructive.


For a medical device, I would expect that this possibility would have been caught very early on in one of any number of Failure Analysis meetings and mitigated by the time the device made it to the (FDA) certification process.


I'm going to assume you haven't used many bits of medical equipment, because doing so for a job leads me to conclude that the software is more flaky than standard commercial software used day to day. Low sales volumes do not make for budgets high enough to support good debugging and development, I guess.


I haven't worked in a medical lab (where our instruments were generally used). But I was a software developer for various medical devices for over 15 years and my conclusion was exactly the opposite: the software was far, far more robust than most commercial software.


It wasn't radiology then (which would be the rough limit of my knowledge). PACS, MRI, CT, RIS, angio gear, image intensifiers, etc. All used over many years, with weird glitches and reproducible errors, including complete system crashes that take hours to recover from, across several vendors.


Our product is a training aid for medical professionals and is not regulated as a medical device, in the same way that a flight simulator is not regulated as an aircraft.


Oh, I got that. I was replying to the person who was wondering what would happen if it was being used for actual surgery.


You'd pay lots of money to get immediate high-level support ... investors will love it!


Then you fall back to normal surgery.


Give the engineer the day off, that's classic lateral thinking :)


The engineer went on to figuring out that if he set management's clocks back a few days he could take them off, since management clearly remembered him being on premises for those days.


Simple and elegant.


A day off? That's all? Give that man a raise! Something to look back upon each month. He might have saved the company and even if not, probably a lot of money anyway... it's only fair to give something back.


A day off is nice, but doesn't mean that much in the scheme of things.

I would like to see more companies write a Thank You letter from the CEO, signed by his managers. Something that he could use during his performance evaluations at the company, or attach to his resume for any other jobs.

It's hard to get concrete evidence like that, which shows your value to the company. It would be great to have documentation that could never be forgotten.


While the letter indeed would be nice, I'd much prefer the raise if it was me.


Or a bonus...


Well, every comment thread on the Internet related to the Rift issue mentioned this as a solution, so it probably wasn't his idea.


Yeah, but if he's the guy who Googled it, he should get the day off anyway if it wasn't really in his realm of responsibility.

It seems ridiculous in this modern age, but there are a huge number of people who will never bother to look into their problems on their own before asking someone else. Then this other person does a simple Google search and becomes the hero expert.

This all too often results in further dependence, with no real reward for the guy who took this basic step except more requests in the future. If this one guy can get a day off in this instance, it'll be a victory for every person who has ever said "Oh, if you google that, you'll see one of the first results with instructions to do x, y, z." to a time-draining coworker.


Googling for computer problem solutions (or just generally) is a surprisingly nuanced skill. Sometimes one person finds in minutes what another fails to find in days, only because of slightly better search terms and faster (or more accurate) assessment of hit teasers.

Also, a lot of problems have their search engine results "poisoned" by solutions for lesser, but superficially similar problems that are worked to death by SEO content farms competing for attention.


Anyone who had an interest in computers in the mid '90s to early 2000s will remember trial software that was good for 30 days but which, if you set the clock back, would give you the corresponding amount of extra time.

I even seem to recall one that, when I set the clock back much more than 30 days, gave me as many extra days beyond the 30 as I had set it back.

Then there were a couple of pieces of software that would detect such trickery and punish you by taking away the time you had left if you set the date back before the 30 days were up.

Anyway, with this in mind, the first thing I thought when I read the headlines was, “I wonder if one can get around this by setting the clock back”, and I doubt I was alone in that, so to say that it “probably wasn’t his idea”... I dunno man.


With Macromedia Flash you had to write down the time when you stopped using it, then set it to a minute after it before running it again - because it remembered the last time it was running and refused to start if the new time was lower. Fun times. I could never have afforded Flash back then.


I've done it for games that are free for a weekend on Steam, when I want to continue playing but don't want to pay for the game. Of course it only works for single-player games. But changing the computer's date quickly makes browsing the internet unusable, due to certificate checks failing.


Imagine how terrible it would be for your customers once that happened in production...


He said it's a surgical training startup so I think it would have been fairly okay.


Those things aren't cheap for simulators, either - not to mention knock-on costs. "What do you mean - I got the doctors in, which alone took a month of herding cats, and now it won't work, just because?"

How low has the SW development bar gone, if "it's okay" now means "at least it's not directly killing people"?


> How low has the SW development bar gone, if "it's okay" now means "at least it's not directly killing people"?

I thought that has been the way ever since OS/2 failed. Getting stuff out to customers has priority over quality control.


Disrupt, innovate, first-mover advantage, growth marketing, yadaa-yadaa.


There has always been a tradeoff between reliability and development time. There wouldn't be a games industry if every video game had the same level of software assurance as a mars lander, because Tetris would cost $200m to develop. A medical simulator lies somewhere between a mars lander and a video game - it needs to provide accurate simulation, but the odd crash isn't a complete dealbreaker.

The bug is now patched, so the downtime appears to be less than 24 hours from discovery to fix. The original error is clearly a major blunder, but Oculus have responded properly.


>How low has the SW development bar gone, if "it's okay" now means "at least it's not directly killing people"?

It gets that low every time a hospital underfunds IT staff and makes horrible project management decisions and product buying decisions.

I've seen that first hand. There's a bunch of corpses at the IT entrance of people who've tried to turn that around.


Hospital...corpses...I have a hard time distinguishing the literal and figurative context here.


The GP was suggesting that this could kill people. I simply implied that it wouldn't, and compared to killing people, I would say a lost day is "okay".


I'll try that for my next programming blunder: "Sure, I've set back hundreds of people one day, but hey, didn't kill them! No big deal, they should even be grateful!"

In other words, comparing to the worst possible outcome is, by definition, not a very high bar.


Yes, but it's an interesting question to ponder. For the last decade or two, government and military procurement has been leaning more heavily towards COTS (commercial off-the-shelf) hardware and software. The thought was that it's cheaper and possibly more reliable than the bespoke solutions that vendors were delivering in the past. Now we see that it isn't a guarantee of anything. Though still probably cheaper, at least initially.

Something like this probably will happen with computer assisted surgery or medical procedures, or an aircraft in flight. Just a matter of time.


Wow! I'm glad you were able to get it figured out. Sounds like a nightmare scenario.

By the way, do you have any links to your surgical training startup? I'm doing some research into VR/AR for surgical telementoring and training and would be interested in seeing how it's in use.


Osso VR - http://ossovr.com/

We've had a fair amount of press coverage in specialist press recently so a search for Osso VR should turn up some recent articles too.


"I was a former game developer turned orthopaedic surgeon." [1]

Wow. That guy sounds interesting...

[1] https://www.youtube.com/watch?v=bqra7wslwCM


Why are you basing a medical appliance on such a walled-garden technology that you aren't in control of, when there are more accessible alternatives? Oculus was already known for lock-up fiascos; this really shouldn't be a surprise for you.


Our product is a training aid for medical professionals not a medical appliance. We're not fundamentally tied to any particular VR device but Oculus has been our primary platform due to better ergonomics and an easier setup experience for a portable demo rig than the Vive.


This is fairly common practice in the medical device field. Volumes for specialized equipment are far too low to justify the NRE (non-recurring engineering) on custom solutions, so many low-volume medical devices integrate COTS solutions wherever possible.

It doesn't help that vendors are generally nervous about liability in medical equipment (this fear is often unfounded, but persistent). As a result, vendors of commercial and industrial equipment generally don't want to engage medical device OEMs with engineering and customization support. If there had been that sort of support in this case, Oculus might have made a custom build without the cert check, just as a de-risking measure.

This vendor reluctance is especially present at the FDA Class III (high risk device) level - most vendors outright prohibit use of their devices in these products. It's an open secret that this still happens anyway in a wink-wink nudge-nudge fashion, just without vendor support - which is arguably worse, but it keeps the lawyers happy.


I think you underestimate how many companies build technologies off of walled gardens - especially in Healthcare.


Although you are correct, I don't see how that really matters. Just because a lot of people do it doesn't make it a good idea.


What's the alternative they should have been using?


Anything based on OpenVR or OSVR? It's not like Oculus is a monopolist in this space, it's just one of the popular options and it's known to be the most locked up.


Wouldn't be the first time in the industry: https://twitter.com/JanHenrikH/status/910754596422868992/pho...


I don't think you understand what a medical appliance is.


>one of our engineers figured out we could get our demo rigs working by setting the clock back a few days

The real MVP of this story. Sometimes a dirty hack is good enough.


That engineer earned himself a nice bonus. Hope he'll get it


Welcome to the future.


Hardly original; kids have been using this to extend the trial periods of shareware since forever. Most Rift users have been using RunAsDate, which hooks the Windows time APIs.


Yup, hardly original to think of a non-obvious security workaround the morning of a conference that could potentially make or break the company's future success, while dealing with what must have been insane pressure from everyone there to figure it out.


It's extremely obvious to anyone who's ever used shareware and it was also posted on reddit within like 5 minutes. Kudos for not using google I guess.



