
> The hacker did not explain the flaw to the company in advance of revealing it to the public, a decision he told Forbes was because he saw "no path to mitigate this from Onity's side." To fix the problem, the locks' entire circuitboard has to be replaced—and on millions of locks, that's a process that could take a long time.

That seems like rather an asshole move on his part. I understand the argument for disclosing security flaws to force a reluctant vendor to deal with them, but in this case he didn't even give them a chance.




I've covered this a number of times. Simply put, I felt that the best route for hotel owners and customers (who I care about, unlike J. Random Vendor) was to make them aware of the vulnerability and make them aware that they've had a horribly insecure product on their doors for nearly 20 years. Given how ridiculously simple the vulnerabilities are, I'd put money on many others having discovered them in the past, and almost certainly having used them for malicious purposes. In addition, there's absolutely no way that Onity did not know about this themselves -- it would not have required digging; it would have been immediately obvious from the design of the system.

The route I took may not have been pretty, but it will get the issue fixed in a timely fashion, I believe, and hopefully alert people to the fact that we need real security processes in place around such things; not having your equipment audited in the case of a security product is simply not acceptable. Not now, and not in 1993.


I appreciate your appearance here. One of the wonderful things about HN is that we often get the facts from the first party source.

I also agree about disclosure - it might have been nice to drop them a note beforehand, but what could they honestly do about it? Nothing more than they are already doing.


Well, for one thing, fewer people would know about the flaw and potentially be able to take advantage of it.


That's called "security through obscurity," which isn't really security at all. It didn't prevent daeken from discovering the vulnerability, which means it's likely others with more malicious intent also know about it and are keeping the fact quiet. When the problem goes unpublished, unsuspecting customers will continue to trust the locks on their rooms. When published, customers can make more informed decisions about where to stay.


> That's called "security through obscurity," which isn't really security at all

That phrase is misused and misunderstood on the net almost as much as Benjamin Franklin's statement on freedom and security.

It means that you should not rely on obscurity to keep a system safe. In the long run, you have to assume the bad guys will find out all your bugs. If someone were suggesting that the flaw not be disclosed and that it would be OK for the vendor to not fix it since it is not disclosed, that would be attempting "security through obscurity" and would be bad.

We aren't talking about the long run here. The relevant question is whether, in the time it takes to deploy a fix, more people will be harmed if the flaw is widely known than if it is kept quiet while the fix is being deployed.

To answer this question, you need to consider several factors, including (1) what steps customers who learn of the flaw can and will actually do to mitigate its effect on them, (2) whether customers will actually learn about it, (3) how many bad guys who would not have discovered it on their own will exploit it after it is disclosed.


Your reply contains exactly the kind of nuance I should have used above. I'm constantly complaining (mostly on facebook) about how arguments devolve into one-dimensional two-sided affairs, and here I fell into the same trap. Thank you for providing that clarification.


No, it didn't prevent daeken and probably others from discovering it, but we can be sure it did prevent a lot of less competent people from exploiting it. "Security through obscurity" is still better than a big door with a sign saying "hack me!"


Kinda. In some ways, it's worse because it conveys a false sense of security to people who trust those measures to protect them, too.


Just out of curiosity, how much more secure would a lock be if the hotel asked the patron to have a smartphone app installed and to enter a personal PIN upon check-in?

When you get to your room, swipe your NFC enabled phone over the lock, then it asks for your pin (on the phone) to unlock the door.

For a smooth user experience, swiping the NFC phone over the lock would need to auto-launch the app and prompt for the PIN.

If you have to find the app, launch it, and maybe make another click to get to the PIN prompt, it would be too cumbersome for users to be a good experience.
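
As a rough sketch (purely hypothetical -- the names derive_unlock_key, Lock, and PhoneApp are made up, and this isn't any vendor's real protocol), the tap-then-PIN flow could be a simple challenge-response: the lock holds only a verification key provisioned at check-in, and the PIN never leaves the phone.

    # Hypothetical tap-then-PIN unlock flow (not any vendor's real protocol).
    # The lock stores only a per-stay verification key; the PIN never leaves the phone.
    import hashlib
    import hmac
    import secrets

    def derive_unlock_key(room_credential: bytes, pin: str) -> bytes:
        # Key material comes from the credential provisioned at check-in plus the guest's PIN.
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), room_credential, 100_000)

    class Lock:
        def __init__(self, unlock_key: bytes):
            self._unlock_key = unlock_key  # provisioned at check-in

        def issue_challenge(self) -> bytes:
            self._challenge = secrets.token_bytes(16)  # fresh nonce per tap
            return self._challenge

        def verify(self, response: bytes) -> bool:
            expected = hmac.new(self._unlock_key, self._challenge, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

    class PhoneApp:
        def __init__(self, room_credential: bytes):
            self._room_credential = room_credential

        def respond(self, challenge: bytes, pin: str) -> bytes:
            # The app only answers the lock's challenge after the guest enters a PIN.
            key = derive_unlock_key(self._room_credential, pin)
            return hmac.new(key, challenge, hashlib.sha256).digest()

    # Check-in: both sides get material derived from the same credential and PIN.
    credential = secrets.token_bytes(32)
    lock = Lock(derive_unlock_key(credential, "4821"))
    phone = PhoneApp(credential)

    # Tap: the correct PIN opens the door, a wrong PIN does not.
    print(lock.verify(phone.respond(lock.issue_challenge(), "4821")))  # True
    print(lock.verify(phone.respond(lock.issue_challenge(), "0000")))  # False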


Er, any system that requires customers to have a particular brand of smartphone, with NFC capability, is never going to happen. Even if it were an option, basing security on this would mean that the 75% of their customers without the required model of smartphone would then be more likely to have their room broken into. Hotels may be kind of clueless, but they're not stupid enough to reject or screw over 75% of their potential customers...

[This is why the frequent suggestions you hear "oh, why can't we replace <system X> with a smartphone app!" are so stupid. Optional smartphone support for convenience features = nice plus; mandatory smartphone = idiotic.]


It really depends on the implementation. There are a lot of ways this can all be done securely (from magstripes -- ignoring the ease of copying -- to chip cards or NFC with smartphones or ...), but at the end of the day, it comes down to the implementation. With some slightly different choices, Onity's system could've been rock solid, but they dropped the ball.


Can you expand on what would have made it rock solid?


Well, from what I know of its failures:

- Use an industry-standard (for the time) crypto algorithm for cards, and use the biggest key size possible. As it stands, they use a (horrible) proprietary algorithm and 32-bit keys.

- Make the lock know which door it's actually for and encode a list of acceptable locks along with the code key values on the card. This prevents a card from one door from opening another door. Not a huge security issue, but it happens more often than you'd think.

- Use secure, authenticated protocols for programming the lock. This is really the critical part; unauthenticated, raw memory reads/writes are just not OK.
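
To make that last point concrete, here is a minimal sketch of what an authenticated programming channel could look like: every command carries a monotonic counter and an HMAC under a per-lock key, so raw, unauthenticated memory writes simply don't exist in the protocol. This is only an illustration under those assumptions, not Onity's design; the names (ProgrammableLock, build_command) are made up.

    # Illustration only: an authenticated lock-programming channel, as opposed to
    # raw unauthenticated memory reads/writes. Names and framing are hypothetical.
    import hashlib
    import hmac
    import json

    class ProgrammableLock:
        def __init__(self, programming_key: bytes):
            self._key = programming_key  # per-lock secret installed at manufacture
            self._last_counter = 0       # used to reject replayed commands

        def execute(self, frame: bytes, tag: bytes) -> bool:
            # Verify the MAC before even parsing the command.
            expected = hmac.new(self._key, frame, hashlib.sha256).digest()
            if not hmac.compare_digest(expected, tag):
                return False
            cmd = json.loads(frame)
            if cmd["counter"] <= self._last_counter:  # replay protection
                return False
            self._last_counter = cmd["counter"]
            # ... apply cmd["op"] here (set clock, add card key, open door, etc.) ...
            return True

    def build_command(key: bytes, op: str, counter: int) -> tuple[bytes, bytes]:
        frame = json.dumps({"op": op, "counter": counter}).encode()
        return frame, hmac.new(key, frame, hashlib.sha256).digest()

    key = bytes(32)  # stand-in for a real randomly generated per-lock key
    lock = ProgrammableLock(key)
    frame, tag = build_command(key, "set_clock", counter=1)
    print(lock.execute(frame, tag))           # True: authentic, fresh command
    print(lock.execute(frame, tag))           # False: replayed counter is rejected
    print(lock.execute(frame, b"\x00" * 32))  # False: bad MAC, never parsed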


You were planning to do a Reddit AMA on reversing in general.

Did that ever happen? Have you written anything on that?


I did indeed -- http://www.reddit.com/r/IAmA/comments/yeiac/iama_reverse_eng...

It went better than I could've ever imagined; it was topping the front page for a while! Seriously awesome experience.


And I thought it went very well--it answered all the questions that I was going to ask you via email.

Thanks for doing it.


It doesn't matter how many times you covered it. It was a dick move, plain and simple.

You obviously knew about the flaw at least a few days before your interview with Forbes (or whatever it was). Shooting a one-liner email to Onity was a no-brainer: "I found a major flaw in your locks. Contact me for details." They reply - great; they don't - fine, proceed as planned. I come from a reverse engineering background, and I'm sorry to say I have lost all professional respect for you, regardless of how good a reverser you are.


All I see is someone who cared more about publicity than about limiting the risk to the hotels and people who might be affected.

You could have informed Onity first and then simply threatened them with full disclosure if they didn't start owning up to the problem themselves. You intentionally didn't do that, and the only good reason I can see for not doing it is that it got you more publicity. It was a selfish decision on your part.


Don't shoot the messenger. The security hole was there for everyone to independently observe. Not telling the public just meant that the public couldn't take their own countermeasures.

Blaming security researchers for finding holes is a very strange anti-pattern. We should be blaming vendors for shipping insecure products!


It's going to happen. Even with full care and diligence there will still be some products shipped with security flaws. It is not ethical to give the company no heads up, not even anonymously.


For what it's worth, this isn't "some security flaws". The device itself allowed unauthenticated memory reads (as a matter of design -- it uses them), and the card crypto is done using a proprietary algorithm and a 32-bit key. It's not that there are security holes, it's that there are security Grand Canyons.
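
For scale, a 32-bit keyspace is only about 4.3 billion values, which is small enough to exhaust by brute force; at even ten million trial decodes per second that's roughly seven minutes. A back-of-the-envelope sketch (check_key is a stand-in oracle, not the actual proprietary algorithm):

    # Back-of-the-envelope: why a 32-bit keyspace is not meaningful protection.
    # check_key() is a placeholder oracle, not the real card-decoding routine.
    def check_key(candidate: int) -> bool:
        return candidate == 0xDEADBEEF  # stand-in for "does this key decode the card?"

    def brute_force_32bit() -> int:
        # Structure of the attack: try every possible key. A real attack would run
        # this against the card-decode oracle on fast hardware, not pure Python.
        for candidate in range(2**32):  # only ~4.3 billion possibilities
            if check_key(candidate):
                return candidate
        raise ValueError("key not found")

    # At ten million trial decodes per second, 2**32 keys take about seven minutes:
    print(2**32 / 10_000_000 / 60)  # ~7.16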


You're assuming that the security researcher is the first person to discover the issue. That's rarely the case. Keeping quiet just gives users a false sense of security and ensures that they can't mitigate the security impact on their own (without the help of the vendor).

Knowledge is power. We shouldn't censor ourselves because someone somewhere can be evil with some information. They have other ways of getting the information anyway.


The following is meant to be general, rather than about this particular case.

Assuming the goal is to minimize harm, then when to disclose depends on an interplay of several factors. Here are some of them:

1. How many people will discover and exploit the flaw on their own if it is not publicly disclosed.

2. How many people will exploit it if they find out about it, but will not discover it on their own.

3. How fast knowledge of the flaw will spread to the people of #2 without public disclosure. E.g., through word of mouth in hacker or researcher circles.

4. How many users of the flawed system will be able to use knowledge of the flaw in order to protect themselves from the people of #1 and #2.

5. How long the flaw will remain available.

6. How lessons from this flaw will teach others to build more secure systems.

Disclosure affects #2 (disclosure increases harm), #4 (disclosure decreases harm), sometimes #5 (disclosure might push a vendor to action), and #6 (disclosure decreases harm).


What purpose does this serve? In general, I agree with you, because most vulnerabilities can be fixed by the vendor in some reasonable (<6 months) amount of time, and by telling the vendor about the vulnerability beforehand, you help reduce the window where the attack can be easily exploited.

This is not such a case; the vendor had no reasonable way of fixing this. Others had probably already discovered (and used) this vulnerability, and in the long term fixing this vulnerability quickly requires motivating the company to do so. Disclosing it privately wouldn't have held much benefit, and might have been detrimental (the company may have tried to use legal means to prevent or penalize the public disclosure).


Right, and mitigation doesn't necessarily have to take the form of fixing the electronics.



