Hotel-room hacks: Picking the lock (economist.com)
96 points by ljensen on Sept 13, 2012 | hide | past | favorite | 69 comments



> The hacker did not explain the flaw to the company in advance of revealing it to the public, a decision he told Forbes was because he saw "no path to mitigate this from Onity's side." To fix the problem, the locks' entire circuitboard has to be replaced—and on millions of locks, that's a process that could take a long time.

That seems like rather an asshole move on his part. I understand the argument for disclosing security flaws to force a reluctant vendor to deal with them, but in this case he didn't even give them a chance.


I've covered this a number of times. Simply put, I felt that the best route for hotel owners and customers (who I care about, unlike J. Random Vendor) was to make them aware of the vulnerability and make them aware that they've had a horribly insecure product on their doors for nearly 20 years. Given how ridiculously simple the vulnerabilities are, I'd put money on many others having discovered them in the past, almost certainly using them for malicious purposes. In addition, there's absolutely no way that Onity did not know about this themselves -- it would not have required digging; it would have been immediately obvious from the design of the system.

The route I took may not have been pretty, but it will get the issue fixed in a timely fashion, I believe, and hopefully alert people to the fact that we need real security processes in place around such things; not having your equipment audited in the case of a security product is simply not acceptable. Not now, and not in 1993.


I appreciate your appearance here. One of the wonderful things about HN is that we often get the facts from the first party source.

I also agree about disclosure - it might have been nice to drop them a note beforehand, but what could they honestly do about it? Nothing more than they are already doing.


Well, for one thing, fewer people would know about the flaw and potentially be able to take advantage of it.


That's called "security through obscurity," which isn't really security at all. It didn't prevent daeken from discovering the vulnerability, which means it's likely others with more malicious intent also know about it and are keeping the fact quiet. When the problem goes unpublished, unsuspecting customers will continue to trust the locks on their rooms. When published, customers can make more informed decisions about where to stay.


> That's called "security through obscurity," which isn't really security at all

That phrase is misused and misunderstood on the net almost as much as Benjamin Franklin's statement on freedom and security.

It means that you should not rely on obscurity to keep a system safe. In the long run, you have to assume the bad guys will find out all your bugs. If someone were suggesting that the flaw not be disclosed and that it would be OK for the vendor to not fix it since it is not disclosed, that would be attempting "security through obscurity" and would be bad.

We aren't talking about the long run here. The relevant question here is if in the time it takes to deploy a fix, will more people be harmed if the flaw is widely known than if the flaw is kept quiet while the fix is being deployed.

To answer this question, you need to consider several factors, including (1) what steps customers who learn of the flaw can and will actually do to mitigate its effect on them, (2) whether customers will actually learn about it, (3) how many bad guys who would not have discovered it on their own will exploit it after it is disclosed.


Your reply contains exactly the kind of nuance I should have used above. I'm constantly complaining (mostly on facebook) about how arguments devolve into one-dimensional two-sided affairs, and here I fell into the same trap. Thank you for providing that clarification.


No, it didn't prevent daeken and probably others from discovering it, but we can be sure it did prevent a lot of less competent people from exploiting it. "Security through obscurity" is still better than a big door with a sign saying "hack me!"


Kinda. In some ways, it's worse because it conveys a false sense of security to people who trust those measures to protect them, too.


Just out of curiosity, how much more secure would a lock be if the hotel asked the patron to have a smartphone app installed and to enter a personal PIN upon check-in?

When you get to your room, swipe your NFC enabled phone over the lock, then it asks for your pin (on the phone) to unlock the door.

For a smooth user experience, swiping the phone over the lock would need to auto-launch the app and prompt for the PIN.

If you have to find the app, launch it, and maybe make another tap to get to the PIN prompt, it would be too cumbersome for users to be a good experience.
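To make the proposal concrete, here's a minimal sketch of the two-factor check being described -- NFC token plus PIN. Every name and value here is hypothetical, invented purely for illustration:

```python
import hashlib
import hmac

# Hypothetical two-factor check: the door opens only if the phone's NFC
# token matches the reservation AND the guest's PIN verifies. All names
# and values are made up for illustration.

def verify(nfc_token: bytes, expected_token: bytes,
           pin: str, pin_hash: bytes) -> bool:
    """Both factors must pass; constant-time compares avoid timing leaks."""
    token_ok = hmac.compare_digest(nfc_token, expected_token)
    pin_ok = hmac.compare_digest(hashlib.sha256(pin.encode()).digest(), pin_hash)
    return token_ok and pin_ok

token = b"reservation-4242"                  # provisioned at check-in
pin_hash = hashlib.sha256(b"1234").digest()  # PIN chosen at check-in

assert verify(token, token, "1234", pin_hash)         # right phone, right PIN
assert not verify(token, token, "9999", pin_hash)     # right phone, wrong PIN
assert not verify(b"wrong", token, "1234", pin_hash)  # wrong phone entirely
```

(A real deployment would salt and stretch the PIN hash rather than use bare SHA-256; this only shows the shape of the check.)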


Er, any system that requires customers to have a particular brand of smartphone, with NFC capability, is never going to happen. Even if it were an option, basing security on this would mean that the 75% of their customers without the required model of smartphone would then be more likely to have their room broken into. Hotels may be kind of clueless, but they're not stupid enough to reject or screw over 75% of their potential customers...

[This is why the frequent suggestions you hear "oh, why can't we replace <system X> with a smartphone app!" are so stupid. Optional smartphone support for convenience features = nice plus; mandatory smartphone = idiotic.]


It really depends on the implementation. There are a lot of ways this can all be done securely (from magstripes -- ignoring the ease of copying -- to chip cards or NFC with smartphones or ...), but at the end of the day, it comes down to the implementation. With some slightly different choices, Onity's system could've been rock solid, but they dropped the ball.


Can you expand on what would have made it rock solid?


Well, from what I know of its failures:

- Use an industry-standard (for the time) crypto algorithm for cards, and use the biggest key size possible. As it stands, they use a (horrible) proprietary algorithm and 32-bit keys.

- Make the lock know which door it's actually for and encode a list of acceptable locks along with the key values on the card. This prevents a card from one door from opening another door. Not a huge security issue, but it happens more often than you'd think.

- Use secure, authenticated protocols for programming the lock. This is really the critical part; unauthenticated, raw memory reads/writes are just not OK.
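As a rough illustration of that last point, an authenticated programming protocol could look something like an HMAC challenge-response. This is a hypothetical sketch of the general shape such a protocol could take, not Onity's (or any vendor's) actual design:

```python
import hashlib
import hmac
import os

# Hypothetical sketch: the lock issues a random challenge, and only a
# programmer holding the per-lock secret can produce a valid tag for a
# programming command. Raw memory reads/writes are never exposed.

def lock_challenge() -> bytes:
    """Lock generates a fresh random nonce per programming session."""
    return os.urandom(16)

def programmer_tag(secret: bytes, nonce: bytes, command: bytes) -> bytes:
    """Programmer proves knowledge of the shared secret over this command."""
    return hmac.new(secret, nonce + command, hashlib.sha256).digest()

def lock_accepts(secret: bytes, nonce: bytes, command: bytes, tag: bytes) -> bool:
    """Lock recomputes the tag; constant-time comparison avoids timing leaks."""
    expected = hmac.new(secret, nonce + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

secret = os.urandom(32)              # provisioned into lock and programmer
nonce = lock_challenge()
cmd = b"SET_MASTER_KEY"
tag = programmer_tag(secret, nonce, cmd)
assert lock_accepts(secret, nonce, cmd, tag)               # legitimate programmer
assert not lock_accepts(secret, nonce, cmd, b"\x00" * 32)  # forged tag rejected
```

Because the nonce is fresh per session, a captured tag can't simply be replayed against the lock later.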


You were planning to do a Reddit AMA on reversing in general.

Did that ever happen? Have you written anything on that?


I did indeed -- http://www.reddit.com/r/IAmA/comments/yeiac/iama_reverse_eng...

It went better than I could've ever imagined; it was topping the front page for a while! Seriously awesome experience.


And I thought it went very well--it answered all the questions that I was going to ask you via email.

Thanks for doing it.


It doesn't matter how many times you covered it. It was a dick move, plain and simple.

You obviously knew about the flaw at least a few days before your interview with Forbes (or whatever it was). Shooting a one-liner email to Onity was a no-brainer. "I found a major flaw in your locks. Contact me for details." They reply - great, they don't - fine, proceed as planned. I come from a reverse engineering background and I'm sorry to say I have lost all professional respect for you, regardless of how good of a reverser you are.


All I see is someone who cared more about publicity than limiting the risk to hotels/people who might be affected.

You could have informed Onity first and then simply threatened them with full disclosure if they didn't start owning up to the problem themselves. You intentionally didn't do that. And the only good reason I can see for you not doing that is so you can get more publicity. It was a selfish decision on your part.


Don't shoot the messenger. The security hole was there for everyone to independently observe. Not telling the public just meant that the public couldn't take their own countermeasures.

Blaming security researchers for finding holes is a very strange anti-pattern. We should be blaming vendors for shipping insecure products!


It's going to happen. Even with full care and diligence there will still be some products shipped with security flaws. It is not ethical to give the company no heads up, not even anonymously.


For what it's worth, this isn't "some security flaws". The device itself allowed unauthenticated memory reads (as a matter of design -- it uses them), and the card crypto is done using a proprietary algorithm and a 32-bit key. It's not that there are security holes, it's that there are security Grand Canyons.
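To put the 32-bit key in perspective, a back-of-the-envelope calculation (the trial rate is an illustrative assumption, not a measured figure):

```python
# Back-of-the-envelope: why a 32-bit key is not a meaningful barrier.
# The trial rate below is an assumed figure, modest for an offline
# attack against a weak proprietary cipher.
keyspace = 2 ** 32                   # every possible 32-bit key
trials_per_second = 10_000_000       # illustrative assumption
worst_case_seconds = keyspace / trials_per_second
print(f"{keyspace:,} keys, worst case ~{worst_case_seconds / 60:.0f} minutes")
# prints "4,294,967,296 keys, worst case ~7 minutes"
```

Even at a hundredth of that rate, exhausting the keyspace takes under half a day.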


You're assuming that the security researcher is the first person to discover the issue. That's rarely the case. Keeping quiet just gives users a false sense of security and ensures that they can't mitigate the security impact on their own (without the help of the vendor).

Knowledge is power. We shouldn't censor ourselves because someone somewhere can be evil with some information. They have other ways of getting the information anyway.


The following is meant to be general, rather than about this particular case.

Assuming the goal is to minimize harm, then when to disclose depends on an interplay of several factors. Here are some of them:

1. How many people will discover and exploit the flaw on their own if it is not publicly disclosed.

2. How many people will exploit it if they find out about it, but will not discover it on their own.

3. How fast knowledge of the flaw will spread to the people of #2 without public disclosure. E.g., through word of mouth in hacker or researcher circles.

4. How many users of the flawed system will be able to use knowledge of the flaw in order to protect themselves from the people of #1 and #2.

5. How long the flaw will remain exploitable.

6. How lessons from this flaw will teach others to build more secure systems.

Disclosure affects #2 (disclosure increases harm), #4 (disclosure decreases harm), sometimes #5 (disclosure might push a vendor to action), and #6 (disclosure decreases harm).


What purpose does this serve? In general, I agree with you, because most vulnerabilities can be fixed by the vendor in some reasonable (<6 months) amount of time, and by telling the vendor about the vulnerability beforehand, you help reduce the window where the attack can be easily exploited.

This is not such a case; the vendor had no reasonable way of fixing this. Others had probably already discovered (and used) this vulnerability, and in the long term fixing this vulnerability quickly requires motivating the company to do so. Disclosing it privately wouldn't have held much benefit, and might have been detrimental (the company may have tried to use legal means to prevent or penalize the public disclosure).


Right, and mitigation doesn't necessarily have to take the form of fixing the electronics.


I once worked with Cody (daeken) when he was reverse engineering the Emotiv EPOC headset, he's definitely top notch. It turns out he does other things: http://demoseen.com/portfolio/


Are hotel room locks really that big a target? If you're in the room, set the deadbolt. When you leave, take your valuables.

The easiest way into a hotel room is social engineering via the housekeeping staff.


The deadbolt doesn't do anything with this, for what it's worth. The deadbolt on Onity locks is software-controlled; that is, there's a privacy switch that's triggered when you throw the deadbolt, and it checks the value of that when you put in a card. If you use a card with the 'privacy override' flag, or you use the Portable Programmer (or my opening device), the lock opens regardless of whether or not you use the deadbolt, as it's disengaged by the lock mechanism when you turn the handle.
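The behavior described above boils down to something like the following sketch. The field names are hypothetical, but the point stands: the deadbolt's privacy switch is just another input the firmware can be told to ignore:

```python
# Hypothetical sketch of the software-controlled deadbolt logic described
# above -- names are made up; the behavior mirrors the description.

def should_open(card_valid: bool, privacy_switch_engaged: bool,
                privacy_override: bool, raw_open_command: bool) -> bool:
    """Decide whether the lock opens for a given request."""
    if raw_open_command:
        # Portable Programmer / opening device: bypasses everything,
        # including the deadbolt's privacy switch.
        return True
    if not card_valid:
        return False
    if privacy_switch_engaged and not privacy_override:
        # Only ordinary guest cards actually honor the thrown deadbolt.
        return False
    return True

assert not should_open(True, True, False, False)  # guest card blocked by deadbolt
assert should_open(True, True, True, False)       # 'privacy override' card opens anyway
assert should_open(False, True, False, True)      # raw open command ignores everything
```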


I think he is referring to a manually operated deadbolt or those latches at the top of the door - the locks that can only be set and unset from inside the room.


Latches will work, but 99.9% of doors with Onity locks will only have the deadbolt inside the Onity lock, which is vulnerable to the problem I detailed above. Just something to keep in mind.


No no, you're missing the point, I think. Nearly every hotel room has a big old manual separate bolt, set up higher and away from the key-based locking system; it slides open maybe 2 inches. Twice in my life the hotel person has given my room to someone else by mistake (I travel a lot for work). That is, I'll be in there, late at night, and someone else puts in a key and it works. After the first time I always set that manual bolt no matter what - just in case. Not that I think there's any real merit to the original point that kicked off this particular thread.


I think you're missing the point - Onity locks don't usually have this "big old manual separate bolt" as they're sold as "deadbolt inclusive".


It's pretty rare not to see them (I can't remember when I haven't) for the reason I listed.



Does anyone actually assume hotels are secure (in a sense other than good faith) though? There are so many points for allocation errors on the cards themselves, dozens of "all-access" cards available to both internal and contracted staff, and different people in every room most nights (e.g. no neighbour familiarity about an issue).


Few people assume hotels are truly secure, but generally when something goes wrong, you have an accurate audit trail. That is, if someone breaks into room 413 using a legitimate card, I can go back and say "ok, Michael was the one who encoded this card for him, after the guest had already checked in". You can't do that with the opening device.


Most hotels have the long, skinny security latches (i.e. http://www.shutterstock.com/pic.mhtml?id=16874026) - those are specifically designed so they can be opened by hotel staff using a special tool (http://www.nathosp.com/product/sl2/hotel_room_door_locks_and...). The deadbolt is typically controlled by the electronic lock and can be overridden.


> When you leave, take your valuables.

So I'm in a foreign land, and I should carry around all my worldly possessions with me?

How is that safer?

(Why can't the hotel provide a lock that works?)


Leave them in the hotel safe (many will give you one in your room) if you prefer. It just doesn't seem worth the hotel putting serious effort into the lock on the room door, because that's always going to be insecure, if only because the minimum-wage cleaning staff need access to all the rooms.


What's the point of mentioning minimum-wage here? Would your logic change if the staff weren't paid minimum wage? If you were staying at a hotel where the staff was paid above minimum wage, would you feel more secure?

The BLS says that the median wage for "Maids and Housekeeping Cleaners" is $9.32. Federal minimum wage is $7.25 per hour. Obviously then, most are not paid minimum wage.

In any case, people also want a long-term job. An aspect of keycard entry is that you have a record of what people entered the room. If only one person entered when something was stolen, then that person is a definite suspect, and may be fired. But if it's possible to circumvent that security, then it's also possible to frame others.


>If you were staying at a hotel where the staff was paid above minimum wage, would you feel more secure?

Yes. Seriously, is that even a question? Wouldn't you?

Higher wages mean two things: the staff have more to lose by being fired, and by implication the hotel puts more effort into its staff. Which means they're probably recruiting more carefully and putting more effort into staff loyalty once they're there.


Then my point is that most hotel cleaning staff are paid more than minimum wage, so minimum-wage staff is the uncommon scenario.

I don't think wages affect honesty to any great extent, no. I think bad working conditions affect honesty a lot more.

If you believe this to be true, do you ask the hotel how much they pay their cleaning staff, and choose the one with higher base pay? How much more are you willing to pay to be in a hotel which pays their employees a higher wage?

Higher wages mean other things than those two. It could mean that it's harder to get staff because there is better employment elsewhere, so there's less risk to being fired for suspicion of theft because it's not hard to find a new job. It could be because the union is strong and able to negotiate better than management, while management actively wants to break the union by treating their cleaning staff poorly in the hopes that the staff will steal, so management has reason to fire them and blame the union for protecting thieves.

(Yes, the latter sound much less likely than the former.)


>I don't think wages affect honesty to any great extent, no. I think bad working conditions affect honesty a lot more.

Sure, but the two are closely correlated.

>do you ask the hotel how much they pay their cleaning staff, and choose the one with higher base pay? How much more are you willing to pay to be in a hotel which pays their employees a higher wage?

I'd be surprised if they handed that information out, and it's not worth a great deal of research. But if I do happen to know, then it changes how much I'm willing to pay for a given hotel, yes. I haven't calculated every facet of my internal hotel-pricing model (and it's almost certainly nonoptimal in some way - just not worth the effort to optimize), but I've certainly been known to pay more for a hotel I had a better impression of, and on the (IIRC unique) occasion when I happened to know what the cleaners at one were paid, I'm pretty sure that was one of the factors.


If people started asking, and making decisions based on that knowledge, then perhaps salaries and working conditions would improve for the cleaners. But you are right, people (including myself) use other proxies instead.


People have posted many videos/blog posts about how easy it is to open a hotel safe if the hotel fails to reset the factory code: http://www.dailydot.com/news/hotel-safe-security-flaw/

Since seeing this video last year, I have yet to find a hotel safe that I can open with all zeros. In the end, I figure that if someone really wants to steal my stuff, they'll eventually find a way.


I can't fit all of my valuables in a hotel safe.

If the hotel wants to fit a lock that doesn't prevent easy access to my room, and my stuff gets stolen, they can meet my lawyer.


I think you assume that the room safe is harder to break than the lock. Is this true?


Real engineered solution - without new hardware: if this thing is not reprogrammable and only has an EPROM, do some real engineering and calculate the ADDITION of bits to set to disable the exploit. That's the one I would be working on if I worked for Onity.

Alternatively, take a mechanical approach to the problem - if you can live without the connector for servicing the lock.

1) De-solder the connector on the board and cut the traces/pads off the board - it won't stop everyone, but it will thwart those who have read about the exploit and try to follow through on it without applying any more critical thinking.

2) Epoxy over the connector (they kind of did this with the security-screw fix, but not really).

3) Leave the connector, but add so much resistance between the connector and the uP that you have to use a special interface cable to talk to the uP. No one will be able to tell that it's not stock until they pull the lock apart.


Even if you fix the vulnerability present in the lock firmware (which you can't do without replacing the Portable Programmer as well), the encryption on the cards is still completely broken.

I've written at length about how this can be fixed; Onity has not yet responded with an effective solution.

(I'm the original researcher)

Edit: Link to my post is here: http://daeken.com/onitys-plan-to-mitigate-hotel-lock-hack Note that their statement about how they would fix it was pulled after Forbes quoted my post.


Yeah, this is what I found fascinating in your paper (http://demoseen.com/bhpaper.html). I had always wondered how they invalidated the old keys automatically.
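For readers who haven't seen the paper, the rotation scheme can be sketched roughly as a sliding lookahead window. This is a simplified illustration with made-up numbers, not the actual encoding:

```python
# Simplified sketch of sliding-window key rotation: the lock stores the
# last-seen key value and accepts values up to `lookahead` steps ahead;
# seeing a newer value advances the window, which is what silently
# invalidates every earlier guest's key.

class Lock:
    def __init__(self, current: int, lookahead: int = 50):
        self.current = current
        self.lookahead = lookahead

    def try_card(self, value: int) -> bool:
        if value == self.current:
            return True                        # current guest's key
        if self.current < value <= self.current + self.lookahead:
            self.current = value               # new guest: rotate forward
            return True
        return False                           # stale key, or too far ahead

lock = Lock(current=100)
assert lock.try_card(101)       # next guest checks in; window advances
assert not lock.try_card(100)   # previous guest's key is now rejected
```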


Out of curiosity, was that part clear? Writing the section on key rotation and lookaheads took me something like 4 days of editing, and I was never actually happy with it.


FWIW, I just read it and it was crystal clear to me.


If these work like they used to - the connector is used to sync the lock with the key machines at the front desk. It requires a reprogram if the master keys need to change (ie someone is fired), batteries die in the lock, etc. Additionally it provides self test info and obviously if you need to force it open (i.e. Maintenance may send the open command in case of reader malfunction). That's why the mechanical solution doesn't involve physical changes to the lock guts - just the housing.


Not sure why this was downvoted (was in the grey a moment ago). This is dead on.


> [D]o some real engineering and calculate the ADDITION of bits to set to disable the exploit.

That is very unlikely to be possible. The ROM may be a masked ROM, in which case it is not re-programmable at all. Quite likely it is a one-time programmable (OTP) ROM. For an OTP, at best you can flip "1" bits to "0", but you cannot change "0" bits to "1". It would take a large amount of luck to be able to patch "1"s to "0"s (and not need to change any "0"s to "1"s) to vector to patched code fixing the vulnerabilities.
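That constraint is easy to check mechanically: a candidate patch image is feasible on an OTP only if every changed bit is a 1-to-0 transition. A small sketch:

```python
# The OTP constraint above, checked mechanically: programming can clear
# bits (1 -> 0) but never set them back to 1, so a patch image is
# feasible only if it differs from the old image by cleared bits alone.

def otp_patch_possible(old: bytes, new: bytes) -> bool:
    """True iff `new` differs from `old` only by cleared (1 -> 0) bits."""
    if len(old) != len(new):
        return False
    # `new` must not set any bit that `old` doesn't already have set.
    return all(n & ~o == 0 for o, n in zip(old, new))

assert otp_patch_possible(b"\xff\xf0", b"\x0f\xf0")      # only clears bits: OK
assert not otp_patch_possible(b"\x0f\xf0", b"\xff\xf0")  # needs 0 -> 1: impossible
```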

In addition, many programmable memories require special programming voltages and they all need the proper control signals - very often the ROM is not in-circuit programmable or is in-circuit programmable only via a test/programming circuit at the factory, not in the field.

WRT #1 and #2, the reason for the connector is to allow the hotel staff to recover from Bad Things like dead batteries and confused/mis-keyed locks. I know I've been the victim of dead batteries more than once... if the only recourse is to destroy the lock to get into the room, the hotel is going to be very unhappy and the guest isn't going to be very pleased either.

#3 is "security through obscurity", which will be effective briefly until the next security researcher figures out how to defeat the change.


Your comments on the (most likely) one-time programmable ROM are exactly what I was thinking but didn't take the time to write with the clarity you did. We are on exactly the same page there. My train of thought was that if it were possible to patch the boards using only software, it would be an awfully nice gesture to the hotel managers to send a technician out for a day and fix everything at once [although, if you have to rip the lock apart to pull the board to reprogram it, you might as well just drop a new board in]. My background is in board-level manufacturing, so the special test fixtures for interfacing and board probing were the easy part to me; hacking in a jump instruction to unused memory, patching the problem there, and jumping back seemed like a really elegant and fun problem to work on.

I didn't look at the exploit in detail, but as daeken reminded us, there are problems with more than just the program in the door lock - so even if a patch to the ROM chip could fix the problem, it probably doesn't fix all the problems, so it isn't a real solution.

You are right, options 1-3 are really just obscurity solutions and not real solutions - thanks for calling me out on that.


Hotels can't even get their internet right. Shit is outsourced to some service company who can't fix on-site problems with their routers, and you just get a shrug of the shoulders from hotel maintenance personnel. How in the unholy fuck do you think a Ramada Inn is going to roll out hundreds of modded door locks?


They don't mod the locks, they call up Onity and say "send us 600 new locks that are not flawed." Onity replies "Sorry, no." Then they fight in court for the next 10 years over whether Onity owes the hotel replacement locks.

Assuming the hotel wins, Onity sends a team in to replace the locks (a relatively simple and already solved problem - that is how the original locks were installed). Then Onity sends the hotel the bill for the replacement service and the hotel says "Sorry, no." Then everybody fights in court over the retrofit bill for another 10 years.


> How in the unholy fuck do you think a Ramada Inn is going to roll out hundreds of modded door locks?

'Internet' isn't their business - providing rooms is.

But, really, this isn't a problem. The available maintenance staff takes care of it, or they have a local locksmith team spend a week at it.



What can people do about it? Barricade their hotel door? No, this is much more of a help to people looking to get at other people, and now they have an extra option. This really opened a market.

If you really cared about hotel customers, you would be on the side of the company that made all these locks, because they really need help. Yes, they screwed up and they deserve punishment, but do the customers have to be the victims?


Nothing is secure. I don't see how the electronic lock is any less secure than the glass used on most house windows. It's like saying glass manufacturers aren't making glass secure enough to protect homeowners from intruders when someone can throw a brick and smash it.


Crappy comparison. Glass is installed for aesthetics, not security. Locks are installed specifically to keep people out. Windows are a known and accepted security hole that often have additional security measures attached to them.


How about glass in general? Glass in police vehicles, glass in a police station, court house... Glass isn't a purely aesthetic material, it has many uses.


>"I would like to point out that the '$30 microprocessor' in 2012 would have needed a refrigerator size computer 20 years ago when the Onity system was designed. Twenty years from now all of our current 'state of the art' security will be hackable with nothing more powerful than a 2032 edition pocket calculator."

Just read this comment on the site - perhaps a bit exaggerated, but I think a valid point nonetheless. Of course, Onity should have done something about the flaw.


Hah, I didn't see that comment on the story. It's funny, but it's completely untrue. The chip may have cost you $5 (rather than the $0.05 it costs now), but a PIC from 1993 -- when Onity released the HT locks, and what they actually used in the locks themselves -- would've opened them just as well as a modern PIC/AVR/Propeller.

If someone didn't know about and exploit this flaw in 1998 (5 years later), I'd be downright flabbergasted. It's just way, way, way too simple.



Mind changing it over to http://daeken.com/blackhat-paper please? I keep failing to keep the git repo up to date (been way too busy).



