Key Reinstallation Attacks – Breaking WPA2 by Forcing Nonce Reuse (krackattacks.com)
1371 points by fanfantm on Oct 16, 2017 | 411 comments



Matthew Green's blog post on why it happened and how it escaped detection is a really good read. https://blog.cryptographyengineering.com/2017/10/16/falling-...


> Representation of the 4-way handshake from the paper by He et al. Yes, I know you’re like “what?“. But that’s why people who do formal verification of protocols don’t have many friends.

+1 for a good read, really enjoyed his writing style.

For those unfamiliar, Matthew Green is a cryptography researcher and professor at Johns Hopkins.

Edit: TIL it's Johns Hopkins, ty /u/dEnigma


It's actually "Johns Hopkins". The story behind the name is somewhat interesting:

http://www.hopkinsmedicine.org/about/history/history1.html


"One of the problems with IEEE is that the standards are highly complex and get made via a closed-door process of private meetings. More importantly, even after the fact, they’re hard for ordinary security researchers to access."

While I'm sure this can't take much of the blame, it sure strikes a chord. The IEEE standards process seems insanely archaic and broken in the open-source era.


It's been years since I was involved with the organization side of the Standards Association, but there was a lot of frustration among staff because the vendors (and stakeholders in general) often had a vested interest in keeping the process broken. IEEE as a whole had a weird relationship with Standards as well. A somewhat related example is that it took staff years to get permission from all the vendors to release the full MAC address allocation database after agreeing to keep it non-public. In general you can probably assume that people who work for Standards are even grumpier about the whole nightmare than people outside the process. It's sort of a perverse form of regulatory capture where the "agency" is still trying to do the right thing, but they're locked in by their constituency.


It is, arguably, pretty broken even in some closed-source arenas. For instance if your objective is to have the IEEE first define thorough, carefully reviewed standards which are then closely and widely implemented throughout an entire industry, 25G and 50G Ethernet were abject failures.


As an Android user, is there any mitigation for this other than ditching my handset and switching to an iPhone, or waiting (hopelessly) for a patch from my vendor?

This really does highlight the absolute disaster zone that the Android handset market has become as far as updates are concerned. I'm sure the Pixels will get a fix relatively quickly but almost every other Android user is going to be left in security limbo.


This is one of those things that should be better with modern handsets and the security patch level for Android. Hopefully a fix for this is included in the November set.

In general most bigger manufacturers have been somewhat decent in updating their flagship devices. With a Sony flagship from the last 18 months for example, you usually won't run more than two months behind on security updates. Samsung is similar if I remember correctly. Hopefully a big exploit like this will be enough of a kick in the butt to get manufacturers releasing security updates faster.


I have an HTC 10, a flagship device that's barely a year old. The fact that I now have to wait a couple of months for a patch to what is clearly a critical vulnerability is just ridiculous. The fact that anyone without a flagship device should now throw that phone away because it will probably never be patched is despicable.

I totally agree with your hope that this will kick both the manufacturers and Google in the butt enough to get something done about this. I don't like our chances though!


To underline your point, even my Nexus 5 (_from Google_), which is a little less than 3 years old, will never receive security updates. And one of the main reasons I chose the Nexus was to be sure to get updates on time. Except for the security vulnerabilities, everything about the device is totally fine. It's such a waste of resources… (In this case I at least have an alternative in the form of LineageOS, which will only cost me time and nerves for the migration.)


The Nexus 5 was released 4 years ago. At the same time the iPhone 5C was released. The 5C also won't receive a patch for this.

There is a problem with handset abandonment, but this is true across all vendors, and it does not underline sequence7's claim that this is solely an Android problem.


To be fair, these two examples are both on the extreme end: the Nexus 5 was one of the longest-supported Android phones on the market, and the 5C had a rather short support period compared to other iPhones like the 5S, which was released alongside the 5C and got iOS 11. Security updates for the Nexus 5 also stopped shipping about a year earlier. There's still room for improvement for Apple (5 years of security updates seems reasonable), but the gap remains large.


Mobile devices also have much less reason to lose support than they used to. In 2010, the difference in hardware between a new phone and a 3-year-old phone was huge - 8 times the RAM, 4 times faster CPU, twice the display resolution, etc. Nowadays, the Nexus 5's specs are on par with midrange phones being released as new. It has all the hardware required to run modern apps. There's no good reason why it can't be supported, besides getting more money from people if they have to buy a new phone every 3 years.


Has anyone ever taken a vendor to court on the basis of consumer rights and not providing security updates and the product therefore not being fit for purpose?

For example, under the Consumer Rights Act 2015 in the UK the product must last for as long as a reasonable person would expect it to, and Apple's interpretation of that is five/six years (see https://www.apple.com/uk/legal/statutory-warranty/).


I'm always tempted by this; I have time, I know people. But the thing is, you agree to far too many contracts by starting to use such a device that just navigating the initial waters to find a path to sue anyone is a complete non-starter. I'm not able to tilt at this windmill. Hopefully the Purism phone succeeds and I can buy one. Till then I'll keep buying a top-of-the-line Apple phone every 4-5 years when support runs out.


If it's anything like Australian Consumer Law, it's really really difficult to sign away those rights. In fact, even attempting to tell a customer that they can sign away those rights can result in company ending fines.


Hmmm, might have to lead with this line of reasoning when I ask Samsung here in Australia why my S6Edge doesn't have an update for this in a reasonable timeframe...


> you agree to far too many contracts by starting to use such a device that just navigating the initial waters to find a path to sue anyone is a complete non-starter

The Consumer Rights Act 2015 also covers unfair terms in sales contracts (and at least in Scotland EULAs are part of the sales contract, per Beta Computers (Europe) Ltd v Adobe Systems (Europe) Ltd; I don't know the status elsewhere in the UK), and it's quite likely you could just go through most of the possible contractual outs and argue they are unfair terms.


The problem is that these aren't phones anymore; they're small computers. They should have support lifetimes that are comparable to desktop computers.


> they're small computers.

And they are more expensive than many computers you can buy on the market. 2 years support on a device that can cost $500+ isn't acceptable.


And yet people buy them.

We are 4-5 years into the period where people have had sub $300 choices, so there is an alternative to spending $500+ on a device that comes with 2 years of support. Maybe not a fantastic alternative, but the $300-$500 extra that people choose to spend says something about what they care about.


Quite frankly, I don't think we should be counting elapsed time from when the device was first released, but rather from when the device was last sold as new. I have a Sony Z3 first released September 2014, but which I purchased new over a year later in October 2015. The last security update this phone received is the May 2016 one.


> The 5C also won't receive a patch for this.

You don't know that. Apple could very well release an iOS 10 update for this.


By that token it's possible the GS2 on Android 4.0.4 might get a patch for this.

But in both cases the stated policy is that the devices will not receive any more updates, either feature or security.


A lot of phone companies are locking their devices from receiving new kernels. It's dumb and I hate it.


Last I checked, LineageOS for the Nexus 5 is still vulnerable to CVE-2017-9417 (Broadpwn). LineageOS might work for keeping the userspace up-to-date, but their kernels are still largely dependent on the upstream vendors. If the problem is in a firmware blob, as is the case with Broadpwn, you are pretty much guaranteed to be SOL without vendor support.


You'd expect better from the horse's mouth, but most other companies are worse. I got a Fairphone 2 and my partner a Wileyfox Swift 2X. Both have had very good software support thus far, with near-monthly updates. The Wileyfox phone is a bit new, but they have a good track record. Nokia (HMD) also seems to be back, this time with good software support (thus far). Compare that to Motorola.

Hopefully Android 8.0 Oreo with Project Treble will stop this ridiculous trend for the rest of us. Together with the smartphone market being saturated (budget phones of 200 EUR are very decent these days), we may end up with long-term support on older yet still decent devices.


On the bright side, Nexus 5 has a lot of community support. So, as you've stated, there are decent ROM alternatives like LineageOS that are actively maintained to implement these patches.


Fortunately the term "flagship" doesn't promise anything about support, patches or security.

It's just a word that means you paid more for having all the bells and whistles that the OEM could offer at the time, instead of going for the next-best model or such.

Fortunately, whether you can afford the best of the best with all the optional extras or only a cheaper second-tier model doesn't affect the security of the device. And it shouldn't, because not being able to afford a "flagship" device doesn't mean you can afford to get hacked either.

Unfortunately, while the security-update frequency ought to be comparable, it turns out that it's mainly comparably bad :-/


LineageOS has got your back:

https://download.lineageos.org/pme


This is why I am rooting for initiatives like PostmarketOS [0] in the hope of breaking this mentality that every year we need to buy new hardware in order to stay secure.

[0] https://www.postmarketos.org/


"With a Sony flagship from the last 18 months for example, you usually won't run more than two months behind on security updates."

That's still truly terrible compared to Apple's legacy device support. iOS 11 and future patches still support even the iPhone 5s, a phone from 2013.


According to https://char.gd/blog/2017/wifi-has-been-broken-heres-the-com... it will be in the November 6 Android patch level.


Any HTTPS traffic is going to be safe from this attack; a VPN would also protect you.


From TFA:

Although websites or apps may use HTTPS as an additional layer of protection, we warn that this extra protection can (still) be bypassed in a worrying number of situations. For example, HTTPS was previously bypassed in non-browser software, in Apple's iOS and OS X, in Android apps, in Android apps again, in banking apps, and even in VPN apps.


This only applies to apps which screw up validation of TLS certificates. There are an unfortunate number of them, but it certainly does not apply to all apps (and it's not an issue for websites).

Either way, this disclosed vulnerability only involves a link-layer man-in-the-middle in order to collect traffic. Active manipulation of traffic (required for TLS interception) is more complicated.


Pray for a vendor patch. The fix landed today in the hostap repository:

https://w1.fi/cgit/hostap/commit/?id=a00e946c1c9a1f9cc65c729...


Does this resolve the issue on the AP side of things? Could I theoretically have an AP update that would resolve this with no need to update clients?


Both APs and clients need to be patched[0]. Mitigations are possible on AP side if no updates are available[1].

[0]: "Finally, although an unpatched client can still connect to a patched AP, and vice versa, both the client and AP must be patched to defend against all attacks!"

[1]: "you can try to mitigate attacks against routers and access points by disabling client functionality (which is for example used in repeater modes) and disabling 802.11r (fast roaming)."


Unfortunately no, from what I understand this is primarily an attack against clients.


Ah yes, I see now that the patch is actually to wpa_supplicant.

Well, hopefully this means no kernel patch will be needed.


I know it's not ideal nor user-friendly, but you can get a device that is supported by LineageOS.

You get updates every week.


Since this is fixed by a patch to wpa_supplicant LineageOS does fix it, but it is pointed out elsewhere in this thread that Lineage is still vulnerable to older hacks like broadpwn on many devices since they have a hard time patching kernel-level vulnerabilities without vendor participation.

Your advice is valid, but it’s important to not have a false sense of security.


I wasn't aware of that, thanks


You should be good if you’re up to date as of November 6th (I think, it may be November 8th) Swiftonsecurity tweeted this out, it’s a description of KRACK and various devices affected by it. Apparently google already fixed it on android? Also it says that iOS is rumored to be protected against this since iOS 11 but it’s not confirmed. Nobody has put out an official statement yet. What’s weird is that commercial vendors like Ubiquiti UniFi (I use them myself) have already released fixes for their APs but the paper says that clients should be the priority and get fixed from KRACK ASAP. And it’s weird because I don’t know of any client-side fix released in the wake of KRACK being public. https://char.gd/blog/2017/wifi-has-been-broken-heres-the-com...


October 6th/8th, you mean?

I found it interesting that, in his article, he said: "With our novel attack technique, it is now trivial to exploit implementations that only accept encrypted retransmissions of message 3 of the 4-way handshake. In particular this means that attacking macOS and OpenBSD is significantly easier than discussed in the paper"

but elsewhere it said recent versions of OS X and iOS are not impacted. I wonder if the "safe" OSes are only vulnerable to the blocking/replay but not the decryption of data?

My UniFi AP-PROs show up today so I'll make sure to update them first thing.

Also, I'm having a bit of a hard time understanding the attack. It sounds like he forces them to connect to his AP, performs the attack, then allows them to connect to the intended network with the zeroed key, THEN is able to sniff that client's traffic because he knows their key? If I understand correctly, this means he cannot sniff the whole network's traffic, only the traffic between the attacked client and the AP? This makes me wonder about the meaning of a pre-shared key, but I'm guessing the PSK is only used to setup the relationship between client and AP, and then after the initial connection/pairing the pre-shared key is no longer used...


> Also, I'm having a bit of a hard time understanding the attack.

He forces them to connect to his own AP and forwards all traffic to the destination so that the client is unaware it has been redirected.

He then forces the client to re-install the key which (on anything that is derived from wpa_supplicant e.g. Linux, Android, etc) the client has blanked out after first use, so the key it reinstalls is now all zero bytes.

He can continue to forward the traffic to the destination so that the client gets responses, but now he can decrypt all of the traffic too.

For clients that re-install the correct key (which the attacker does not recover in any way), the attacker has to rely on snooping enough encrypted data: the key re-installation also resets the frame counters, which leads to nonce reuse, a serious problem in ciphers like AES-GCM.
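As a toy illustration of why that frame-counter reset is so damaging: a counter-mode cipher turns (key, nonce) into a keystream and XORs it with the plaintext, so two messages encrypted under the same (key, nonce) leak their XOR. The sketch below simulates the keystream with SHA-256 (Python's stdlib has no AES; real CCMP/GCMP differ in detail, and the key and messages are made up), but the keystream-cancellation step is the same:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream generator, a stand-in for a CTR-mode cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, nonce = b"session-key", b"\x00" * 8  # reinstalled key, reset counter

p1 = b"GET /login HTTP/1.1"   # first frame
p2 = b"secret-cookie=12345"   # frame sent after the key reinstallation
c1 = encrypt(key, nonce, p1)
c2 = encrypt(key, nonce, p2)  # same (key, nonce): keystream is identical

# The attacker XORs the two ciphertexts; the keystream cancels out.
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in zip(p1, p2))

# Knowing (or guessing) p1 recovers p2 without ever learning the key.
recovered = bytes(x ^ p for x, p in zip(xored, p1))
assert recovered == p2
```

With one plaintext known or guessable (HTTP headers are highly predictable), the other falls out immediately, which is exactly why nonce reuse in CTR-style modes is catastrophic.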



I appreciate this is coming from a UK perspective and that not everyone is this lucky, but I don't remember the last time I used public WiFi on my phone thanks to a general mistrust of it and the fact that 4G (or at least HSPA+) has very good coverage here.


Same here (Devon, UK) — although I do use the WiFi we have on buses here, and occasionally when in cafés.


This is about WiFi "protected" with WPA2: basically treat it as suspiciously as you would any public WiFi.

If you choose not to use public WiFi because you can't "trust" it, then you now need to stop using your private WiFi too (until your systems get appropriate patches).


Don't forget the fact that carriers are now snooping your 4G data and selling it to advertisers. It has been a bad week for privacy.


"is there any mitigation for this other than ditching my handset and switching to an iPhone or waiting (hopelessly) for a patch from my vendor."

Using a VPN is the best way to mitigate this until your device is patched, assuming you trust your VPN provider or run your own VPN.

Edit: Actually, even if you don't trust your VPN provider, you'll be protected against this attack (KRACK), given their client is implemented properly.


> given their client is implemented properly.

Unfortunately this is a big part of trusting your VPN provider. It’s shocking how bad the situation is, especially it seems on those marketed via Android apps. [1]

[1] https://arstechnica.com/information-technology/2017/01/major...


Well the VPN ecosystem has an enormous long tail - the paper you cite tested 283 (!) apps. It's unfortunate but somewhat expected that a significant number, especially the ones that haven't been around for long, would have issues.

I'm sure, given the size of that list, that they tested some of the biggest players in the VPN space. I think it'd be good to know which apps were tested and didn't show any issues, especially in light of KRACK and the Android bug in wpa_supplicant.


Disclaimer: Used to work for an OEM

With critical bugs like these, it's certain Google will require recent devices that have enough affected users to be updated ASAP. Expect an update within a few weeks.


The problem is that "recent" seems to mean <3 years old, and often, <2 years old. But there are still millions of active Galaxy S3s, S4s, S5s, etc.


As others suggested, ensure that all communication uses TLS (be it HTTPS et al., or tunneling traffic through a VPN).

Also you could install a better version of Android on your phone rather than an outdated vendor version. That will probably fix more security related issues than just this one :)


How would you make sure that apps use TLS for communication? In the browser it's easy to see, but in apps those details are hidden away from the user.


Sniff the apps, uninstall if they don't, it's just plain unacceptable at this point. If there's something you really need that doesn't, set up a VPN.


Use a VPN, a commercial one that actually works, like F-secure Freedome.


I ordered a Librem 5 for this among other reasons. https://puri.sm/shop/librem-5/


Currently the only mitigation is to constrain your browsing to properly configured HTTPS (TLS) web sites.


You can (try) to restrict your browsing to HTTPS sites only.

But it's very difficult to ensure that all the communications your device is making (background services, vendor apps...) go through that channel.


If only there were some certification body that ran an App Store with rules against unencrypted traffic...


This issue is two-fold, right? You can install plugins that force SSL client-side (on the main site and any AJAX calls thereafter), but like you said, you have no idea what calls that site is making server-side. They could be sending everything you send them over plaintext after the initial TLS-secured request. Rough times.


Luckily, the servers past the initial SSL link won’t be using wifi, so at least you won’t be any worse off than before.


Or using a VPN.


It was quite nice to let a wifi router be the VPN client to offload it from all your laptops/phones/etc. and better guarantee "VPN always on".. so much for that.


Too bad DNS isn't properly protected against local attacks.


Fortunately, SSL/TLS (with HSTS) does not depend on your local DNS resolver being secure.


HSTS only works if you have visited the site before or it is hard coded (see Chrome and Google services for example).

The reality is that DNS remains, and will continue to remain, a giant hole in TLS.
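For reference, opting into HSTS (and eventually the preload list) starts with the site serving the standard response header; the max-age value below is just the common one-year example:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

Until a domain actually lands in the browser's baked-in preload list, that header only helps on the second and subsequent visits, which is exactly the first-visit gap described above.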


All major browsers implement HSTS preloading, and getting added is quite simple. A very large percentage of your average internet user's traffic is covered by this.


Preloading is a problem waiting to happen. It works fine while only a small portion of the internet uses it. But when you have a 2 GB preload file with a few billion entries, things are not going to work so well.


The idea is to make HTTPS the default before that happens. In the meantime, you can fit a lot of domains into bloom filter-like data structures.
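As a sketch of that idea, here is a minimal Bloom filter (a toy, not any browser's actual data structure): it stores set membership compactly, with rare false positives but never false negatives, and for an HSTS preload list a false positive merely forces HTTPS for one extra site.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item over a fixed bit array.
    Membership tests can return false positives, never false negatives."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 7):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k independent positions by salting the hash with the index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Hypothetical mini preload list.
preload = BloomFilter()
for domain in ("google.com", "paypal.com", "example.dev"):
    preload.add(domain)

assert "paypal.com" in preload  # preloaded domains are always found
# A domain that was never added is almost certainly reported absent;
# a rare false positive would just force HTTPS for a site that didn't ask.
```

The whole structure above is 128 KiB regardless of how many domains it holds; the trade-off is that you tune the bit-array size and hash count against the acceptable false-positive rate.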


Fail-safe is indeed the preferred option here, yet the resulting Denial of Service is still unpleasant.


Install one of the major ROMs like AOSP or LineageOS. Relying on your vendor for software or purchasing a device that forces you to isn't the best idea these days.


> Relying on your vendor for software or purchasing a device that forces you to isn't the best idea these days.

Relying on the efforts of unpaid volunteers doing their best to hack together binary blobs is also not the best idea...

Not all devices are supported by major ROM distributors, nor is the support guaranteed to be endless or current... (even some devices as major as the Galaxy S6 for example)


Only so much depends on those binary blobs though and changes to, for example, wpa_supplicant, happen at a much higher level.


Agreed, but unfortunately if the difficulty of maintaining/backporting/forward-porting binary blobs means that nobody will release ROMs for your device (anymore), your point is pretty much irrelevant. ;-)


This is true, however it doesn't make the point irrelevant.

1. Beyond difficulty of porting blobs, you might well also simply get your updates from a custom ROM faster than you'll get them from the manufacturer, even if it's still supported. That in itself is an advantage.

2. Backporting updates to third-party components can be simple (assuming a stable ABI/API); the easiest case is probably that of just dropping binaries from a similar phone that did get updated into a zip file and then flashing it. Look at busybox installers, for example; all you need is a version compiled for your hardware. Java components can sometimes be changed as well (see Xposed). This works on desktop systems too, sometimes: I've been able to 'fix' older games into working just by dropping a newer version of a DLL into the game directory (DirectX, OpenAL, etc.).

3. Maybe the company is just stupid. Motorola (or is it Verizon?) has tried Marshmallow for the Moto E2 in Europe, but not in the US. I'd expand on this but I'm on mobile and I'm lazy.


Ubiquiti just released a patch for KRACK, and soon others will, I imagine. From a client perspective, same as always: wait for a patch from your OS vendor. Edit: it seems this patch only handles modes where the AP acts as a client, like a bridge or site-to-site link. It's still a client-side fix, and patching APs used traditionally won't fix this.

In practice, everything of value should be going over TLS. If you're worried you should be using a VPN on untrusted networks. This attack, if I'm reading it right, doesn't do anything someone on your wlan or lan can't do right now via ARP poisoning and other attacks. So being on that work connection or restaurant wifi is almost the same risk level of this attack.


The thing is, not every protocol offers TLS. Take SMB (network shares) for example... encryption is only offered as of v3 and if a company/university wants to allow Windows 7 clients, they're capped to SMB v2.1.


That's a good example. I guess there's other mitigation at work here. In my case, if I'm in a public wifi area and connecting to my work PC, then I'm using a VPN to access SMB. SMB just isn't open to the wifi attacker.

In a case where it is, it's curious how you would inject data into an SMB stream and not fail checksums from client-side checking. Maybe it's trivial to deal with this, not sure.

If the WPA2 protected wifi network is using AES, which is the most common in my experience, then they won't be able to inject any data. From the Krack website:

If the victim uses either the WPA-TKIP or GCMP encryption protocol, instead of AES-CCMP, the impact is especially catastrophic. Against these encryption protocols, nonce reuse enables an adversary to not only decrypt, but also to forge and inject packets.


True, but what I'm getting at is watching (not injecting/modifying) the username and password fly across the campus airwaves.


Use an always-on VPN?


Always using a VPN should be a mitigation until a patch is released. Should only be a couple weeks out for Pixel devices.


Ditch wifi, go 4G only!


I'm not convinced 4G is more secure.


It does have higher barriers to entry, and penalties for broadcasting unlicensed. Granted, both of those are more obfuscation than anything else.


Regarding mobile data in the US: https://news.ycombinator.com/item?id=15477286


There are many places indoors where I can get a WiFi signal but no cellular service.


> or waiting (hopelessly) for a patch from my vendor

If this is an actual in the wild exploitable issue, there will be patches very quickly for handsets in the support period, as quickly as there is for iOS. This has been the case repeatedly before as well.

What a weird post in general. Maybe wait to complain about this a month down the line or so? Instead it's just effectively noisy rhetoric.


The support period for an iPhone is at least 3 years of regular patches and feature updates. Most Android phones on the market will have a 'support period' of 12 months if you're lucky. My point is that the roll-out of updates to Android is unacceptably poor and inconsistent and relies on optimism on the part of the user.

The use of the word hopelessly was probably unnecessarily dramatic I agree but I'll leave it there so your comment makes sense.


There's a patch for iOS? Wonderful! Oh wait, there isn't.


Did you intend to post that reply to me? Because of course there isn't a patch for iOS yet, and when there is it will leave out hundreds of millions of devices that no longer receive patches. My point was that if one wants to stomp their feet and do the easy "Damn Android" complaint, at least wait until the basis is sound.


Ah, you're right, sorry: that was intended one reply up, where the poster seems to claim that iOS is already patched.


Correct me if I'm wrong, but basically the attack is against clients, not access points, which means simply patching the AP will not do; one would have to patch all of the clients. And the AP patches that are now coming in are probably for client mode, so they fix a certain scenario where the AP is a client, which is far from the common one?


Correct, for about 98% of cases. Clients are the weakest point in this scenario.


Finally a way to get all IoT devices connected to WiFi!

Remember, 'S' in IoT is for Security.


> Remember, 'S' in IoT is for Security.

One of the best quotes I've heard in a while.


Stop panicking (unless you need your daily dose of the End of the World drama).

From the source: In general though, you can try to mitigate attacks against routers and access points by disabling client functionality (which is for example used in repeater modes) and disabling 802.11r (fast roaming).

For ordinary home users, your priority should be updating clients such as laptops and smartphones.

Source: https://www.krackattacks.com/


Some people don't want to read all the articles and tend to panic. That's why, for any security issue, we need a list of actions to reduce risk: what to do, when, how, etc. Otherwise people will keep disputing whether it's feasible or not, and then finally a journalist will explain how to fix the risk. I like to read the whole article, but sometimes it's very hard to check whether it's truly feasible or just panic; for example, when WannaCry came out the information was a mess.


What's the practical impact here? What do I do with a normal house with routers, laptops, smartphones etc?

Are manufacturers like linksys, d-link issuing patches now or will it be enough to have windows/os x/iOS/android updates enabled? Or do I need both?


Wait for your clients to be patched. Routers are not so much the problem (unless they're in range-extender mode).


"submitted for review on 19 May 2017" ... "OpenBSD was notified of the vulnerability on 15 July 2017"

Can anyone explain the timeline of releasing such significant security findings? Why is it disclosed to the public half a year after being submitted for review? I'd guess the (publicly funded) research behind it is a lot older than that.


For a vulnerability of this magnitude, it's not unusual for a responsible disclosure to have a five month review window.

e.g. Dan Kaminsky's discovery of DNS cache poisoning had a 5 month responsible disclosure embargo.


The intent is to have as many fixes available as possible at the time knowledge of the flaw becomes widespread.


Of course.

From my understanding of research at public institutions there is a long period of time and steps between finding something interesting and submitting a paper for review.

Why not disclose the vulnerability first to concerned parties and then write up a fancy research paper? Why the other way round?

Only two explanations I could come up with: either there was a very short time frame between identification of the vulnerability and the writing of the paper, or there was further research needed. Or... I don't know.


July => October.

That's 3 months. It's a reasonable delay if you have to alert lots of manufacturers and they need time to roll out critical patches to lots of devices.


Full disclosure is reasonable, and the only truly effective methodology. Anything else just allows vendors to delay or ignore.


It takes time to understand a vulnerability, create a patch and distribute it.

Please stop confusing slowness with an intent to delay or ignore.


Vuln disclosure has historically been associated with vendors delaying and ignoring issues for a long long time. That's the whole reason FD came about. There's no confusion on my part.


> OpenBSD was notified of the vulnerability on 15 July 2017, before CERT/CC was involved in the coordination. Quite quickly, Theo de Raadt replied and critiqued the tentative disclosure deadline: “In the open source world, if a person writes a diff and has to sit on it for a month, that is very discouraging”. Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability. In hindsight this was a bad decision, since others might rediscover the vulnerability by inspecting their silent patch. To avoid this problem in the future, OpenBSD will now receive vulnerability notifications closer to the end of an embargo.

I'm not convinced it was a bad decision. Why would you want to leave your users vulnerable? It's possible that this has been exploited in the wild.


Seems rather prisoner-dilemma-ish[1].

Up until now, there were no indications that this was being exploited publicly. After a flaw like this gets known (whether through a coordinated disclosure or through OpenBSD's early patch) you can be assured people will be exploiting this.

Do you both stay silent and take the minor risk of your users being vulnerable for a short time longer whilst patching and disclosure is being coordinated with all parties (-1, -1), or do you "betray party B" but get your own users secured as soon as possible (-3, 0)?

I think coordination makes more sense in a flaw as big as this.

1: https://en.wikipedia.org/wiki/Prisoner%27s_dilemma


And be sure to note the iterated version which is where things get interesting https://en.wikipedia.org/wiki/Prisoner%27s_dilemma#The_itera... You can already see it in this case, where Theo "defecting" leads to less cooperation in future rounds.


For those who missed the FAQ,

> To avoid this problem in the future, OpenBSD will now receive vulnerability notifications closer to the end of an embargo.

i.e., _explicitly signalling_ that this researcher intends to play "defect" with OpenBSD in future rounds, should future rounds occur.


The lack of cooperation already happened. Agreeing to let him patch and then throwing him under the bus for doing so.


I don't think it is just your users vs. all other users. Why are other vendors not patching as quickly?

In this case, it was not a short time.


This probably hasn't been exploited in the wild. Anyway the point of coordinating disclosure is to leave fewer people vulnerable overall. If one team releases a patch early, attackers can analyze the patch and start using the vulnerability against unpatched systems. Waiting for everyone to patch at once closes that window.


How can you state that it probably hasn't been exploited in the wild with any degree of confidence? It's possible that the same flaw was found and exploited years ago by black hat hackers and/or state security services. We have no way to know whether this actually happened, or even estimate the probability.


Because it hasn't been seen before, it's not likely that it has been exploited. Even after knowing about the flaw for a while, the Wi-Fi Alliance says there is no evidence that this was used maliciously before. https://www.wi-fi.org/news-events/newsroom/wi-fi-alliance-se... We can't know absolutely but with all the attention wifi has gotten since the days of war driving, there's a good chance it would have been caught.


Yeah, without the alliance stating what methods were used to look for attacks, it's hard to take that seriously... it's the same line used in just about any security breach.

Is this attack likely to generate log evidence on affected APs in their default configuration, or is it so far down the stack that no evidence is generated and nobody could refute this claim?


Nope, definitely further down the stack. This is part of the protocol that deals with retransmission of lost packets. Nobody logs those.


Is there a way I can install an open source phone OS on my old Android phones to keep them patched? I'm not prepared to keep buying new phones just because manufacturers only provide intermittent updates for a year or two.

Anyone got any suggestions for options?


> I'm not prepared to keep buying new phones just because manufacturers only provide intermittent updates for a year or two.

You could just ... buy an iPhone and get timely security updates for years.

EDIT: Downvote if you want, but if iOS 11 contains this security fix exclusively and not iOS 10, then an iPhone 5s bought on 20 September 2013 is going to get this fix. If Apple release an iOS 10 update and you bought an iPhone 5 on 21 September 2012 you're covered too.


Even on a more modern smartphone, I don't want to lose access to my 32-bit apps by migrating to iOS 11. So I hope a patch for iOS 10 will be made available.


Out of interest, what apps are you using that are still 32bit only?


Not the OP, but I just lost access to FlightTrack which was an awesome flight search and status monitoring app that could even pull your itinerary from TripIt.


Tetris. And I use it probably more (in terms of time spent) than Facebook, Safari and email combined!


and force me to use some proprietary-built WebKit? Nah, thank you.


Seems like a fair trade for timely security updates?


If you build it yourself, you can use whatever browser you want.


Can one install their own web rendering engine on iOS?


In principle yes (if it is not against the app store guidelines). But if submitted as an app, it cannot use JIT compiling for security reasons. This will make the speed of JavaScript execution very uncompetitive with WebKit.


It's not just JIT. Quoting from https://developer.apple.com/app-store/review/guidelines/ section 2.5.2:

  Apps should be self-contained in their bundles, and may
  not read or write data outside the designated container
  area, nor may they download, install, or execute code,
  including other apps.
So you can't ship a JS interpreter either, even without a JIT.

And section 2.5.6:

  Apps that browse the web must use the appropriate WebKit
  framework and WebKit Javascript.
So you just can't have a web browser not using the built-in WebKit, period.

As far as I can tell, you can install a web rendering engine that is not the built-in WebKit, as long as you only use it for HTML/JS that come with your app. At that point the JIT caveat applies.


You can ship a JS interpreter, it just can’t download code from the internet and run it (yes this makes shipping a browser in the App Store impossible).

But regardless, with your own device, you can run whatever code you want on it.


That is why I wrote:

> (if it is not against the app store guidelines)


Having switched to an iPhone recently, it does bother me that you can't download iOS updates over 4G. When this gets fixed, I'll need to turn on wifi first (or install iTunes).


LineageOS has a moderately large selection of supported phones for a custom ROM and it has weekly updates. My two and a half year old Moto E has the October 5th security patches for Android.


> My two and a half year old Moto E has the October 5th security patches for Android.

But it has very few kernel security patches: https://cve.lineageos.org/android_kernel_motorola_msm8610


Look through the list yourself, but at least on my device, most of those kernel security issues aren't really of significant impact as apps don't have access to the APIs needed to trigger them and they're not remotely exploitable.


Unfortunately, Google has given app developers a quite powerful tool to disable the use of their apps on non-official OS images, in the form of SafetyNet. So even if you can install an open source version of Android expect a bunch of stuff to no longer work afterwards.


Magisk ("systemless" root) currently passes the SafetyNet checks, and it and its Magisk Manager app are both FL/OSS, hosted on GitHub [0], with pre-built images linked from XDA [1].

I'm using it successfully with LineageOS 14.1 (Android 7.1.2).

[0] https://github.com/topjohnwu

[1] https://forum.xda-developers.com/apps/magisk


Which is probably a game of cat and mouse at best.


Not really: ultimately Magisk is root and Google SafetyNet isn't; SafetyNet has to run at the application level. That means Magisk will always win until remote attestation is enforced. There hasn't been a breaking update since July, if I recall correctly, and the Magisk developer had it patched in about a day.


SafetyNet doesn't actually detect custom ROMs, a stock LineageOS will pass it on most devices at least.

It attempts to detect root or modifications to the ROM by malicious software.

Certain newer devices have secure boot attestation that may cause SafetyNet to fail unless spoofed to be a different device which does not have such attestation.


It also detects unlocked bootloaders, even if nothing is modified. And you need either root or an unlocked bootloader to make proper backups.


No. It only cares about unlocked bootloaders on devices that shipped with Android 7 because a requirement of shipping with that was hardware support facilitating dm_verity, which is essentially a check that the bootloader wasn't tampered with. Without the necessary hardware there's simply no way to perform this check in anything resembling a reliable fashion.

Also, neither root nor an unlocked bootloader is required to make "proper backups". Some data actually can't be backed up, and for some data there is no point in making a backup. If the goal is to be able to restore the system to a specific, known state, a bit-for-bit image backup of the entire filesystem is just one way to accomplish the task.


> No. It only cares about unlocked bootloaders on devices that shipped with Android 7 because a requirement of shipping with that was hardware support facilitating dm_verity, which is essentially a check that the bootloader wasn't tampered with. Without the necessary hardware there's simply no way to perform this check in anything resembling a reliable fashion.

So in other words "yes, that is a requirement that will eventually be on all android phones"? Am I misunderstanding something? Older phones being an exception does me little good going forward.

> Also, neither root nor an unlocked bootloader is required to make "proper backups". Some data actually can't be backed up, and for some data there is no point in making a backup. If the goal is to be able to restore the system to a specific, known state, a bit-for-bit image backup of the entire filesystem is just one way to accomplish the task.

The last time I tried adb backup and restore, it was a mess. Multiple apps like Skype had no data. And authenticator explicitly opts out of being backed up.

Titanium backup, on the other hand, works perfectly.

Ideally I would just have a rooted phone, but then safetynet complains, and I can't even use Netflix and pokemon. As an alternative I could accept an unrooted but unlocked phone, and root it only when making and/or restoring backups. But having neither is a big hassle.


> So in other words "yes, that is a requirement that will eventually be on all android phones"? Am I misunderstanding something? Older phones being an exception does me little good going forward.

To date it means that it's very possible to bypass any protections put on this though - I believe this may even be possible without spoofing the device in this way, but in any case, Magisk works on any device available today.


Oh sure you can bypass it, but effort is put into purposely breaking that bypass, and it can happen at any moment.

It's really not the same as being free of annoying and unhelpful restrictions.


SafetyNet is not about "official" status, it's about security checks.

I'm actually persuaded that I don't need terminal root access on a device (except for system debugging), but rather a firmware signed with my own release keys, and apps that need privileged access baked in.


I'm using https://lineageos.org/ (previously known as CyanogenMod) on most of my older Devices. I think this is as close to an open source OS as you can get right now.


Depends on the phone. I'm using a ~ 4 year old phone with LineageOS. I also have a Russian phone whose userland source code was never released, and no open source ROM exists; this phone is swimming in vulnerabilities and languishing in Android 6.


Another option is OmniROM: http://www.omnirom.org/

Also fairly actively developed and supports a wide range of devices.


The Wi-Fi Alliance has published an official statement:

'There is no evidence that the vulnerability has been exploited maliciously […].'

That is probably about to change …

https://www.wi-fi.org/news-events/newsroom/wi-fi-alliance-se...


That is very strange for them to say. Unless someone is sitting around collecting full take packet captures of everything going on around them and looking through it all, there would be no way to be aware of this.


Have I got this right, in layman's terms?

The client is forcibly disconnected from the WiFi network and reconnects to the attackers network instead.

The attacker doesn't need to know the WPA2 password but it accepts the connection setting the encryption to zeros.

The client thinks it is connected to the original wifi network and continues as normal.

Wifi traffic is intercepted and unencrypted.


Not quite: The attacker watches for the initial client->AP encryption negotiation (or forces it by forcing a disassociation), records one step of that negotiation and replays it to the client. That has the side effect of making the client reinstall its encryption key and reset its transmit nonce, so keystream gets reused. Since WPA2 effectively encrypts with a stream cipher (AES in counter mode), reusing a key/nonce pair opens the traffic up to known-plaintext analysis, which allows a listener to decrypt it. So the user is still connected to their existing AP, but since keystream is being reused, attackers can decrypt the client->AP communication.

There's no need for a second AP in all this, just someone in range of the client who can replay packets to the clients.

(Good TLDR here: https://blog.cryptographyengineering.com/2017/10/16/falling-... )
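A toy sketch of why that keystream reuse is so damaging, in pure Python. The random bytes here just stand in for AES-CTR output (this is not CCMP), and the HTTP strings are made up; the point is that encrypting two messages under the same keystream leaks their XOR without the attacker ever touching the key:

```python
# Toy illustration of two-time-pad leakage from keystream reuse.
# os.urandom stands in for the AES-CTR keystream a real cipher would produce.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)          # same key + same nonce => same keystream

p1 = b"GET /login?user=alice HTTP/1.1  "
p2 = b"GET /login?user=bob HTTP/1.1    "

c1 = xor(p1, keystream)             # first packet
c2 = xor(p2, keystream)             # second packet, under the SAME keystream

# An eavesdropper who captures both ciphertexts learns the XOR of the two
# plaintexts without knowing the key:
leak = xor(c1, c2)
assert leak == xor(p1, p2)

# If one plaintext is guessable (HTTP headers usually are), the other one
# falls out immediately:
recovered_p2 = xor(leak, p1)
assert recovered_p2 == p2
```

With a fresh nonce per packet the keystreams differ and none of this works, which is exactly why the nonce reset matters.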


>There's no need for a second AP in all this, just someone in range of the client who can replay packets to the clients.

How would you drop packet 3 without a new AP?


You don't. You record it and replay it. You want the client to get the same packet 3 over and over.


Are you sure about that? From the paper (section 3.3):

> Note that the adversary cannot replay an old message 3, because its EAPOL replay counter is no longer fresh.

And a related update from the TLDR post you originally referenced (which I believe is causing confusion):

> Update: An early version of this post suggested that the attacker would replay the message. Actually, the paper describes forcing the AP to resend it by blocking it from being received at the client. Thanks to Nikita Borisov for the fix.


> The client is forcibly disconnected from the WiFi network and reconnects to the attackers network instead.

The client is tricked into moving to what it thinks is the same WiFi network running on a different channel, but which is actually the attacker's network.

> The attacker doesn't need to know the WPA2 password but it accepts the connection setting the encryption to zeros.

The attacker doesn't need to know the WPA2 password, and (for Android and Linux clients) the client then defaults to an encryption key of all zero bytes.

> The client thinks it is connected to the original wifi network and continues as normal.

Yes.

> Wifi traffic is intercepted and unencrypted.

Wifi traffic is intercepted and can be decrypted (since the encryption key - all zero bytes - is now known).


Just the traffic between the impacted client and the network, right? Because each client is using a different key (has to be, if we're able to reset just one client's key to all zeros)


> This can be abused to steal sensitive information such as credit card numbers, passwords, chat messages, emails, photos, and so on.

... if transmitted over plaintext http


Note that in the demo video they use sslstrip to defeat websites' attempts to redirect to HTTPS.

The only protections here are HSTS (which most websites don't enable, though major ones like banks usually do) and manually typing https:// into your address bar.
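For reference, HSTS is just a response header the site sends over HTTPS; once seen, the browser refuses plain HTTP to that host for `max-age` seconds, which defeats sslstrip on later visits. A small illustrative sketch (the parser is made up for the demo, not taken from any browser; directive names come from the header itself):

```python
# Sketch: what a Strict-Transport-Security header value looks like and how
# a client would break it into directives. Illustrative code, not a browser.

def parse_hsts(value: str) -> dict:
    """Split an HSTS header value like 'max-age=N; includeSubDomains'
    into a {directive: argument-or-True} dict."""
    directives = {}
    for part in value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, arg = part.partition("=")
        directives[name.lower()] = arg or True
    return directives

header = "max-age=31536000; includeSubDomains"   # a typical value
policy = parse_hsts(header)

assert int(policy["max-age"]) == 31536000   # pin HTTPS for one year
assert policy["includesubdomains"] is True  # and for all subdomains
```

The catch, as noted above, is that the very first visit (before the header has ever been seen) is still unprotected unless the site is on the browsers' preload list.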



Commonwealth's online banking is actually at my.commbank.com.au and does have HSTS - https://www.ssllabs.com/ssltest/analyze.html?d=www.my.commba...

CUA does have it - https://www.ssllabs.com/ssltest/analyze.html?d=ob.cua.com.au

Bankwest does but has some awful problems elsewhere - https://www.ssllabs.com/ssltest/analyze.html?d=ibs.bankwest....

But yeah, Westpac and NAB don't, and in addition to the ones you tested, ANZ and St. George don't have it either. That's pretty unacceptable really.


These are real issues that you can report to them.


You can (and should) also be using HTTPS Everywhere: https://www.eff.org/https-everywhere.


I think it's a nice plugin until you forget that you're using it and you rage for minutes/hours trying to understand why you can't access some website.

It might be worth it.

I remember spending hours trying to figure out why Google Adsense wouldn't render correctly. In the end I figured out that it was Adblock's fault :))


This only applies to websites which land on a plain HTTP page and then load links to HTTPS pages. Entering "https://" explicitly before any URL accessed will fully mitigate this.


Exactly, this particular sentence seems to overly dramatise the situation.

Is this issue any different to using open wifi at a cafe, which many many people do, relying on HTTPS for their security? (This is an honest question)


At its worst, this attack reduces a protected network to an open wifi, yes.

The risk is that you may do things on a protected network assuming it really is protected - this is more of a thing for organisations rather than consumers, for example organisations might have unprotected services accessible over their office wifi.


Or if combined with some other vulnerabilities...


Somewhat pointless remark as WPA only protects up to the access point. Any vulnerability after that has wider implications regardless of the WPA status.


It's not really pointless. MiTM attacks become much easier to accomplish with something like sslstrip once access to the network is gained.


"Our attack is especially catastrophic against version 2.4 and above of wpa_supplicant, a Wi-Fi client commonly used on Linux. Here, the client will install an all-zero encryption key instead of reinstalling the real key. This vulnerability appears to be caused by a remark in the Wi-Fi standard that suggests to clear the encryption key from memory once it has been installed for the first time."

"Because Android uses wpa_supplicant, Android 6.0 and above also contains this vulnerability. This makes it trivial to intercept and manipulate traffic sent by these Linux and Android devices. Note that currently 41% of Android devices are vulnerable to this exceptionally devastating variant of our attack."


Some resources for those who want to keep updated on vendor patch status:

https://www.reddit.com/r/KRaCK/comments/76pjf8/krack_megathr...

https://github.com/kristate/krackinfo


What's the general approach to the fix? Insist on a new handshake if a client sees a duplicate message #3? Just keep going with the old sequence number?

It seems like vendors need to eventually come to some consensus on how to change the protocol instead of each fixing it in their own way.


The research talks a lot about how it somewhat depends on the implementation of the wireless client, but only in regards to Linux and OpenBSD, anybody know what the status on the Windows implementation is?


>anybody know what the status on the Windows implementation is?

Seconding this. I wonder if it is something fixable at the OS level, or if individual WiFi drivers need to be updated too.

Would love to see someone throw together a list of OSes, routers, and other WiFi gear that is known to be patched/unpatched/unknown.


Answering my own question: looks like someone is tracking it over here: https://char.gd/blog/2017/wifi-has-been-broken-heres-the-com...

Microsoft's status is unclear at the time of writing.


I'm not sure there even is a "Windows implementation" of this. For a long time each driver implemented its own 802.11 stack.


I believe that the code for the 4-way/group key handshakes etc is part of Windows even in XP, though there was the option of using your own supplicant before Vista.


Those running the popular ESP8266 and ESP32 boards for various IoT devices: a fix has been published for the RTOS running on those boards. If you're building devices on these platforms, try to get this out to your customers as soon as possible.

http://espressif.com/en/media_overview/news/espressif-releas...


> how did this attack slip through, despite the fact that the 802.11i handshake was formally proven secure?

So, we cannot trust even formal verification?

> it’s a factual statement. In formal analysis, definitions really, really matter!

If lack of definition implies flaws in formal verification, does that mean we need an additional formal verification of formal verification?

Update:

> We need machine-assisted verification of protocols, preferably tied to the actual source code that implements them.

Haskell, here is your opportunity :-)


> So, we cannot trust even formal verification?

It's explained in the article: 2 unit tests, 0 integration tests. The formal verification appears to prove the correctness of the two pieces independently, but not of their composition.

> Haskell, here is your opportunity :-)

you're just moving the problem to the correctness of the compiler.


> you're just moving the problem to the correctness of the compiler

But it would be a huge leap forward.


Formal verification all the way down.

I'm a complete layman in this field, but mustn't it bump against the Incompleteness Theorem at some point? There's no way to prove your definitions.


The critical point is specification vs. implementation. Any difference creates a loophole which can be abused.


> For example, an attacker might be able to inject ransomware or other malware into websites.

A reason to use https even for the most basic websites, including the ones embedded in IoT devices on local networks.


Quick googling shows that at least one person came very close to realizing that the 4-way handshake should have had hard replay protection: http://slideplayer.com/slide/5762070/

On page 30 of the presentation: "Authenticator may (or may not) re-use ANonce"


    Wonderfully Poetic Acronym
    ---

    WPA Privacy Attack!

    Wi-Fi Protected Access
    Wasn’t Programmed Appropriately
    Wads of Potential Attacks
    Wireless Public Access
    Without Prior Allowance
    Well, Pretty Apocalyptic
    WoPA!

    When Patches Arriving?
    Wardrivers, Present Arms!
    Weaponized Privacy Assault
    Wardriving’s Productive Again
    Wide-open Point of Access
    Wrecks Privacy Automatically
    Welcome, Protocol Attackers

    Where Patches, Admin?
    Worthless Privacy Attempt
    Wrong Protocol, Admin
    Won’t Protect Anything

    Weak Privacy Attempt
    Waste of Precious Attention
    Wins Prying Award

    Wired Past, Again


Does anyone know if Apple release security fixes for Airport? I know it isn’t actively developed, but you’d hope they’d release critical security fixes as they still sell them.


The main attack targets 4-way handshakes, so doesn't target access points. You should worry about updating your clients, not your AP.

Source: https://www.krackattacks.com/


Fortunately most access points will be fine, but those performing client functions (eg repeaters) will need updating.


"This is achieved by manipulating and replaying cryptographic handshake messages." so that means that the mac address has been spoofed to make the AP think that he is always talking to the same mac address.

If I'm plugged into the router directly then i should be good because it eliminates the wifi handshake. So even though other devices on the wifi network could be affected, the node that is plugged in is safe against this?


> If I'm plugged into the router directly then i should be good because it eliminates the wifi handshake.

Eh, yes, if you're plugged in you're good because it eliminates Wifi, period.

For that one client, anyway.


Well yeah, you'd always be safe against these types of attacks if you're wired in. Even on Ethernet.


So if I use my wired-in node as an SSH tunnel out to the "internets", tunneling all traffic from my wifi-connected nodes, then this mitigates the issue till updates come through?


That's a feasible option on laptops running macOS or Linux, but not for Android clients. Running an SSH VPN (tunneling all traffic) requires root and has a severe performance penalty (which you will notice on your battery). You'd notice it on the laptops as well, but I guess that matters less.

Funnily enough, OpenBSD didn't implement WPA(2) for a while. Instead, they pushed their users towards IPsec and OpenSSH.


Debian's repos, and by extension Ubuntu's, also have wpa_supplicant 2.4; we will see if they update to 2.6 or release a patch. Probably a patch before 2.6.

It would be nice if there were a rule that package repos and distros adhered to: any package that has had a security issue must always be updated to the latest version in the next release or sooner. As vulnerabilities are discovered, the list of such packages would grow, hopefully preventing some future attacks. Obviously it's not foolproof, but every little bit counts.


There has always been a rule for bug-fix and security updates:

Apply the minimum necessary change to solve the problem.

This means cherry-picking the mainline patches where possible, or back-porting them where modification is required for them to apply (and work as intended) on older releases.

Especially with older versions it often isn't possible to update to a later upstream release because that depends on later versions of other packages. The dependencies can rapidly multiply to affect tens or even hundreds of packages.

Ubuntu patches were prepared and released within 4 hours of the security team being aware of the vulnerability. Same goes for Debian.


Where do I go to get the patch?

I looked here and I don't know where to pick up the patch; I also ran Update Manager in my Ubuntu distro, but no dice :(

https://bugs.launchpad.net/ubuntu/+source/wpa/+bug/1723909 http://people.canonical.com/~ubuntu-security/cve/pkg/wpa.htm...


just dropped, woohoo :)


OpenVPN doesn't require root. You can use your own server or find a trusted commercial provider. I recommend airvpn, https://airvpn.org/?referred_by=287899


I had my home wifi configured like that for a few years: an AP without any security, wired into a network firewalled so that it only allowed ICMPv6 and OpenVPN. Sadly, while this worked ten years ago, it's completely unusable with various IoT-ish devices (for me, the Wii was the device that made me switch to WPA2-PSK).

Edit: also, at that time this was at least subjectively significantly easier to set up than wpa_supplicant ;)


Yep! That's what I'm doing right now.


*even on WEP is what I meant to say. Haven't had coffee yet, sorry.


Do we now need WPA3?

No, luckily implementations can be patched in a backwards-compatible manner.
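Roughly, the client-side fix boils down to "never reinstall a key that is already in use", so a replayed or retransmitted message 3 can no longer reset the nonce counter. A minimal sketch of that idea (this is not actual wpa_supplicant code; class and method names are made up):

```python
# Toy sketch of the backwards-compatible client-side mitigation:
# track which key is installed and refuse to reinstall it, so the
# transmit nonce is never reset mid-session.

class Supplicant:
    def __init__(self):
        self.installed_key = None
        self.tx_nonce = 0

    def install_key(self, key: bytes) -> None:
        self.installed_key = key
        self.tx_nonce = 0          # nonce resets only when a NEW key goes in

    def handle_msg3(self, key: bytes) -> str:
        if key == self.installed_key:
            # Patched behaviour: acknowledge the retransmission but do NOT
            # reinstall, so the nonce keeps counting and is never reused.
            return "ack-only"
        self.install_key(key)
        return "installed"

sta = Supplicant()
assert sta.handle_msg3(b"PTK-1") == "installed"
sta.tx_nonce = 42                                 # some traffic has been sent
assert sta.handle_msg3(b"PTK-1") == "ack-only"    # replayed message 3
assert sta.tx_nonce == 42                         # nonce NOT reset
```

An unpatched client would take the second message 3 through `install_key` again, zeroing the nonce and reusing keystream, which is the whole attack.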


It's a pity this didn't completely break WPA2 the way WEP was broken; now it will be years before there are any new security developments. Things like management frame authentication and dynamic client keys for open networks would be big improvements for the majority of use cases.


The problem is that tons and tons of devices will not receive updates until they die.


Don't you only need to patch one end of the communication? Eg if phones are patched, they're safe, even if the AP is not. Then again, I didn't read the attack fully, this might be a client-only problem.


Tons of phones won't be patched. Android phones generally only get system updates for a small portion of their lifespan.


You can mitigate with a vpn.


How do I install a VPN on my IoT lightbulbs?


My understanding is that only the light bulb's traffic can be decrypted. If you see it go from blue to red without your consent, then you'll know. Otherwise, the WiFi password is still safe.


How many BT and Virgin home routers will be patched? Somewhere around 0?


Both Virgin & BT force upgrade consumer routers overnight.


Latest Virgin upgrade was a year ago, so I'm not holding my breath...


Mine last updated a couple of months ago, around the end of August. (V2.01.12, superhub2ac)


Do I have to update the both the AP and the client or is one of them enough?


Client may be enough. Depends on the AP and only the manufacturer can accurately answer that question.


Has anyone posted "2 unit tests, 0 integration tests" yet? [1] It's funny, because the 4-way handshake actually had security proofs, which considered all the pieces in isolation, but never in combination.

[1] https://gfycat.com/HotOrangeCoypu


Here is the commit that landed the fix for LEDE today:

https://git.lede-project.org/?p=source.git;a=commit;h=bbda81...


This is not an end-of-the-world type vulnerability.

1. Does not affect long-term credentials - certs, wifi passwords are still safe. Rather, confidentiality (secrecy) from client --> AP is affected, and in some cases packet forgery is possible (integrity).

2. Actually accomplishing this attack, for now, requires special and expensive hardware (mid- to high-range SDR gear). It's also not that reliable outside of a lab environment.

3. Everything you care about _should_ be going over TLS, which mitigates all effects of this attack. If it isn't, fix it.

This is a great moment for you to fire up Wireshark and audit the traffic going over your wireless link. If it's not adequately protected and you care about it, fix it.


Special gear isn't all that expensive. From a cursory view of the video and a skim of the paper, they're using hostapd, which means that they simply need a wifi adapter with a sufficiently-good driver. Anything with the RTL8188CUS seems to be pretty nice (that's what I'm using in a raspberry pi project, myself).

So - I agree with you that there's a barrier to entry, but it's not that big of a barrier.


Agreed. Not specialized equipment. Just something that lets you shoot raw frames and run as a WiFi AP. I have 2 such USB WiFi dongles in my bag for setting up such environments.

Give http://www.tp-link.com/us/products/details/cat-5520_TL-WN722... a shot. It will let you set up an AP in a VM easily, etc. It is our currently anointed dongle for VM-hosted MiTM setups.

I haven't tested it, but it should work fine if they are just using mods to hostapd.


Note that there are two versions of that dongle. Version 1 has an Atheros chip [1] with the best Wi-Fi support, but version 2 has a Realtek chip [2] with subpar support. AFAIK it isn't possible to get version 1 of the dongle today.

My suggestion is the Alfa AWUS036NHA [3] (the last three letters are important!), which has an AR9271 chip with great support via the ath9k driver.

[1]: https://wikidevi.com/wiki/TP-LINK_TL-WN722N

[2]: https://wikidevi.com/wiki/TP-LINK_TL-WN722N_v2

[3]: http://www.ebay.com/itm/ALFA-AWUS036NHA-802-11n-Wireless-N-W...


Hmmn, I wasn't aware. I will have to check and see, but I probably have a v1, since I have had it for a while. It has always taken a bit of effort and hunting to find WiFi cards with good chipsets that support monitor mode and AP mode painlessly (for WiFi frame capture, etc.).

Thanks for the info... I am a little disappointed, the ones we have been using are all very reliable and worked out of the box, so we must have the Atheros ones. It also seems impossible to order the older version specifically.


>It also seems impossible to order the older version specifically.

Yep. I've been trying for a while, but couldn't be sure if what I was looking at was version 1. Especially since the Alfa alternative is bulkier.


Arch Linux Wiki and Amazon comments are sometimes the best resources for hardware compatibility and reviews. Sort of amusing.

Also, confirming: everyone as recently as a few months back seems to have gotten a 1.0 or 1.1 version, but newer ones are now 2.0.

https://imgur.com/a/jcnbE (one from my bag).


On second read, timing requirements aren't a thing if you are spoofing the network (obviously), so for that attack vector you don't need specialized hardware. (Won't let me edit :( )


I do think it's an end-of-the-world type vulnerability, at least as far as Wi-Fi goes.

1) The paper claims the confidentiality compromise allows the attacker to hijack a TCP connection: "allow an adversary to decrypt a TCP packet, learn the sequence number, and hijack the TCP stream to inject arbitrary data". This holds in all cases, even in the cases where the attack doesn't allow forgery (CCMP).

2) There's no such claim in the paper, and according to the researcher, exploiting this on Android and Linux is trivial. Apparently also on macOS. Did you see the video on their website?

3) There's no way for you to control this (apps, HTTPS stripping, for instance). Most importantly, there's no way for the average user to control this, short of using a VPN.

Again, as far as Wi-Fi security goes, seems pretty end-of-the-world to me. I don't think the huge attention this is getting is unwarranted.
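On point 1, here's a minimal sketch of why learning one sequence number is enough: a TCP receiver accepts a spoofed segment whenever its sequence number lands in the receive window (simplified, ignoring SACK and the exact RFC 793 acceptance tests):

```python
def in_window(seq: int, rcv_nxt: int, rcv_wnd: int, mod: int = 2**32) -> bool:
    """A receiver accepts a segment only if its sequence number falls in
    [rcv_nxt, rcv_nxt + rcv_wnd) modulo 2^32 -- which is why decrypting
    one packet and learning the sequence number enables injection."""
    return (seq - rcv_nxt) % mod < rcv_wnd
```

Without the decrypted packet an attacker must guess within a 32-bit space; with it, injection is a single well-formed segment.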


Sky-is-falling is FUD:

The attack is standard: break an existing secure TCP connection and trick the target into re-creating it to a host controlled by the attacker via ARP poisoning or route hijacking. After that, either convince the target to accept a bogus cert or redirect to an insecure connection. In the former case the issue is that browsers include way too many root CAs, and those CAs can issue certs for any domain; in the latter case the issue is that users are not paranoid enough.


That's not the attack at all. And there's nothing standard about it.

The attack is that nobody could do what you're describing on a WPA2-protected Wi-Fi network before, and now they can.


You have two schools of thought here... optimist vs. pessimist.

Remember that the attack mostly affects client implementations and therefore still needs proximity to the victim(s). This makes most of the end-of-the-world scenarios impractical (they even state this in their Q&A) and leaves exploitation to direct/APT groups alone.


Well I did mention it's "an end-of-the-world type vulnerability, at least as far as Wi-Fi goes".

I don't think it's a lot of consolation saying something along the lines of "Wi-Fi security is broken, but it's not so bad because it's Wi-Fi"


You should read "Silence on the Wire: A Field Guide to Passive Reconnaissance and Indirect Attacks" by Michal Zalewski to expand your universe of things you should be afraid of, pal.

It's an interesting book that can really burst your bubble about how bad things are, and yet we are still here.


Yeah, I've had 'Silence on the Wire' for a while - brilliant book, although I confess I haven't ever been able to sit down and really read it end to end. But I'd say I'm familiar with the topics he talks about.

I'm not sure how that squares with the fact that WPA2 is completely insecure and trivial to decrypt on Android. Except maybe in a "well, who trusts Wi-Fi security anyway?" sense, to which I'd reply: "Actually, a lot of people. Including people on this thread."

I actually buy the argument that the RSA issue affecting YubiKeys, announced today, is perhaps more important, since it's harder to mitigate than by using a VPN. But I don't see how bringing up Silence on the Wire makes this any less important.

Again, I haven't read the book fully or in detail, so I could be wrong about that, I guess.


> 3. Everything you care about _should_ be going over TLS, which mitigates all effects of this attack.

This is probably the biggest misconception.

Many, many websites and APIs don't have HSTS enabled to force all connections to use TLS.

The author demonstrates using sslstrip to downgrade the connection of match.com to steal credentials.

How many people watch the green "secure" indicator in the URL bar to ensure it doesn't change mid-session?

How many thousands of apps don't even have such an indicator to observe?

How many millions of phones and APs will never get patched?

This is a severe vulnerability.


> Many, many websites and APIs don't have HSTS enabled to force all connections to use TLS.

True. Yet another reason for us to push for it.

I have a Chrome extension that sets the background color of all form fields to red if the site they were served on, or the form's ACTION attribute, is not HTTPS.

That said, pretty much every website in my day-to-day use, except for casual reading, is pinned to TLS. APIs are the notable exception you pointed out, but otherwise HSTS is quite widely used, and it's especially effective with preload lists.

> How many thousands of apps don't have this indicator to observe?

Sure, there will be some, but your standard Java Apache client (along with 99% of the libraries used in apps) doesn't have this kind of downgrade behaviour. If they expect validated HTTPS, they will fail without it.

> This is a severe vulnerability.

Yepp :D Not the end of the world, though. I think the main fallacy here is the implicit assumption that the link layer is secure. That has never really been the case, and a broken Wi-Fi model is merely one more testament to this fact.
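For what it's worth, HSTS itself is just a response header the browser remembers. A rough, purely illustrative sketch of parsing its directives (names per RFC 6797):

```python
def parse_hsts(value: str) -> dict:
    """Parse a Strict-Transport-Security header value, e.g.
    'max-age=31536000; includeSubDomains; preload'."""
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # How long (in seconds) the browser must refuse plain HTTP.
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            # Signals the site wants into browsers' baked-in preload lists,
            # which protects even the very first visit.
            policy["preload"] = True
    return policy
```

The preload directive is what closes the remaining hole: without it, the very first request to a site can still go out over plain HTTP and be stripped.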


> How many thousands of apps don't even have such an indicator to observe?

I honestly never considered that one...


No, you do not need expensive hardware such as an SDR to carry out the KRACK attacks. Most plain standard WLAN adapters support AP mode, which is all you need to simulate the rogue network, as demonstrated in the video.
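For reference, standing up a WPA2 AP on a supported adapter takes only a minimal hostapd config, something along these lines (interface name, SSID, and passphrase here are placeholders):

```
# Placeholder interface name and SSID -- adjust for your hardware.
interface=wlan0
driver=nl80211
ssid=test-ap
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=changeme-example
```

The published attack tooling reportedly builds on modified hostapd, so nothing beyond a commodity nl80211-capable card is needed.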


You still need proximity to the client; a powerful SDR will definitely improve reliability in real-life environments.


I stand corrected. A rogue AP / MiTM only needs an AP with the ability to send raw packets, gotcha.


Where did you find the information that it needs an SDR? I couldn't find it in the paper (but I didn't read it fully) or on the website.


Speculating. I don't think SDR is needed. Logically, all you need is a WiFi card that lets you do fairly standard things with the WiFi frames.


As far as I can tell that’s dangerous misinformation.


Most IoT devices never get patched. With this vulnerability you can just drive around and spy on webcams. I mean, that's pretty bad. It's enough to make me want to build rpi wifi-wired bridges/gateways for every IoT I own (which, thankfully, isn't all that many, but enough to annoy me).


Don't forget about HSTS. The author demonstrates in the video that you can sometimes bypass TLS if HSTS is not set up.


Also doesn't it require the attacker to have access to your wifi already? If that's the case, it's a hazard for connecting in a Starbucks or on your mobile service wifi, but you would be safe on your home or corporate wifi (unless the attacker is a colleague or relative!).


> Also doesn't it require the attacker to have access to your wifi already?

No it doesn't. Watch the video. It creates a clone of your network and tricks the victim's software stack to connect into it.


Nope. The man-in-the-middle attack it uses involves forwarding and replaying encrypted packets without decrypting the contents, so all it requires is that the attacker be within range of your wifi; they don't need any of your keys or passphrases.


Totally safe, until your local war-driver sniffs out your insecure device and comes back at two in the morning to upload illegal content via your ISP.


Except the attack doesn't get you access to their wireless network. It allows you to redirect someone from their wireless network to your own (spoofed) wireless network and then you can snoop the traffic.


If it works one way, what reason is there to think it won't work in the opposite direction? This flaw is in the protocol itself.


You're freaking me out with that scenario


So, even if I'm using the extension HTTPSEverywhere I'm safe?


HTTPSEverywhere will not magically upgrade a site that doesn't serve HTTPS to HTTPS. If you connect to a site that doesn't support HTTPS, you are vulnerable.


Oh, I see. Thanks for the answer! What is the extension useful for, then?


The extension makes you use the HTTPS connection when the site you are connecting to is known to support HTTPS.

Websites can automatically redirect to HTTPS if the client connects over HTTP, but many websites don't redirect.


It has the option to block HTTP traffic, making sites that don't support HTTPS unusable.

You could create a separate "secure" profile and feel safe that all traffic is secured, while still being able to browse HTTP in another profile, for instance.


Theoretically. The extension can refuse to load sites that aren’t using HTTPS, but the real flaw is sites that use SSL instead of TLS. Attackers can reject SSL but they can’t do anything about TLS, so the security of your browsing would be affected by the website’s HTTPS configuration, and whether they use SSL, TLS, or both (the only 100% safe method is TLS only). I know for most people disabling SSL and going TLS-only isn’t high on their list of priorities so I expect this attack to be very successful on the internet as it is right now.


tcpdump -nw - | strings -n 6

Credit to https://twitter.com/marcan42


What exactly should a person who uses this command be seeing or looking for that would indicate a problem or not?



I'd assume that all vendors would be busy getting KRACK fixes out, but instead ASUS focuses on adding Facebook functionality to their routers..

https://www.asus.com/Networking/RTAC68U/HelpDesk_BIOS/


There is a lot of talk along the lines of "at least important things use HTTPS now." Well, for WAN traffic, that's largely true. For corporate/university intranets, not so much. For example, SMB (network shares) only supports encryption as of version 3.0, and a server admin who disables handshaking below v3.0 is disallowing Windows clients below 8.

    if (win7 && smb && wpa2) then vulnerable to password theft


the post https://www.theregister.co.uk/2017/10/16/wpa2_inscure_kracka... mentions:

> As Hudson notes, the attacker would have to be on the same base station as the victim, which restricts any attack's impact somewhat.

If I understand correctly, there has to be an existing connection for the attack to work?


> For ordinary home users, your priority should be updating clients such as laptops and smartphones.

So even if you patch all the devices in your house/company/whatever, you can't be safe... Then your aunt's unpatched Android connects to your Wi-Fi and puts your whole network at risk. Or maybe that not-so-old security camera or smart TV that will never be patched.

Time to move all those guys to an isolated VLAN...


The WPA key is not recovered, so it should only affect the unpatched client.


I wasn't even thinking about the key. I was thinking about compromising a device using a targeted TCP hijack, and then using that device to compromise the rest of the network. Long shot though.


Sure, that might be possible, for example to serve some malware to that device. More likely is that a device you allow onto your Wi-Fi already has some malware from earlier.


This might be a good time to switch off the main SSID and exclusively use the guest SSID, which is a feature in many routers. At least it would help to isolate wired computers from the common Wi-Fi network. One could then safely use wired computers for sensitive communication.

It might also be a good time to invest in NIC dongle manufacturers considering how many systems only ship with wifi.


This once again shows why complexity is evil in cryptography. The likelihood of a vulnerability in a cryptosystem increases exponentially with the number of state transitions it has.

http://cr.yp.to/talks/2015.10.05/slides-djb-20151005-a4.pdf


Can somebody explain why the replay-attack protection of WPA2 is not working in this case? Aren't out-of-order packets thrown away?


I'm not sure I understand the concern with breaking WiFi. Okay, so you're vulnerable to snooping and injection by people in the same coffee shop or your neighborhood. But you're already vulnerable to that from anybody on the Internet between you and the site. HTTPS solves both of these.

Am I missing something?


There's a treasure-trove of data and metadata in your DNS requests, destination IP addresses, and traffic analysis. You're not just vulnerable to snooping either, but also traffic injection. You're exposing a much larger attack surface by losing the layer of security provided by WPA2.


Lots of sites aren't on HTTPS yet. Some never will be. Additionally, there could be intranet communications no one bothered to secure; maybe the company's POP email server was never configured for encryption.


Yeah, Intranet is a good example.


In practice, the points where you're most susceptible to MITM/interception are the initiation and termination endpoints. Most of these connections terminate in data centers/hosting providers, which should generally have decent security and plenty of disincentives for snooping.

However, homes, coffee shops, airports, etc -- those are places where someone could execute this attack successfully.

You're right that any transit router could intercept your connection and do these things but in practice those routers are "reasonably" well secured.


I don't think I have ever seen a colo provider that did any meaningful monitoring of traffic on their datacenter-wide customer-facing Ethernet networks. Usually they struggle to sort out operational issues caused by misconfigured customer hardware (e.g. duplicate IPs, or non-converging STP due to incompatible configuration of a customer's switch), much less have the capability to detect or even prevent a targeted attack.


Security in layers, and because locally-hosted content may not always be sent over HTTPS (because people love to cut corners).


How encrypted is your average home network, really?


Yes, you're missing SSL stripping.


Some details here on the hostapd security disclosure page: https://w1.fi/security/2017-1/wpa-packet-number-reuse-with-r...


Looks like mikrotik already fixed theirs a couple of weeks ago as well: https://forum.mikrotik.com/viewtopic.php?f=21&t=126695


How are FullMAC devices patched? Don't they require firmware flashing of some sort?


Just a FYI. Certain enterprise access points that implement counter measures against rogue APs should be able to thwart this attack. Here is a link to documentation of the feature in one such vendor’s products:

https://docs.ruckuswireless.com/unleashed/200.5/t-EnablingDi...

The AP will begin broadcasting deauth frames against the rogue AP as long as it sees it. There are probably some edge cases, but I would expect either the rogue and real APs will fight over the clients indefinitely (thwarting information leaks) or the attack code will trigger an exception and crash because what programmer would expect the client to immediately disconnect?


This project seems to enable the same thing on commodity hardware:

https://github.com/moha99sa/EvilAP_Defender/wiki

Setup some RPis with WiFi dongles and you can likely make a perimeter defense. IANAL and I do not know how the FCC would view this, although I imagine it is not worse than the Ruckus APs that can do this, yet still passed their certification.

This is not a substitute for patching devices, but it might help somewhat with devices that will never be patched.


So is there any precautions I should take with my macbook and iphone?


Not really; just prefer wired LAN over WLAN and wait for patches for both of your devices.


Now, the OS of processor management units like Intel AMT and the AMD equivalent also incorporates wireless stacks. What are the chances of those actually getting fixed?


Can I use MAC whitelisting to mitigate the attack?


Not really. As far as I can tell, the attack basically requires spoofed MACs anyway because the keys are derived in part from the MACs, so whitelisting won't get you much benefit if any at all.
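For background on why the MACs are already in play: the 4-way handshake derives the session key (PTK) from the PMK plus both MAC addresses and both nonces. A rough Python sketch of the 802.11i PRF-384 key expansion (simplified, not checked against official test vectors):

```python
import hmac
import hashlib

def prf_384(pmk: bytes, label: bytes, data: bytes) -> bytes:
    """IEEE 802.11i PRF: concatenated HMAC-SHA1 blocks, truncated to 48 bytes."""
    out = b""
    for i in range(3):  # 3 * 20 bytes >= 48
        out += hmac.new(pmk, label + b"\x00" + data + bytes([i]), hashlib.sha1).digest()
    return out[:48]

def derive_ptk(pmk: bytes, aa: bytes, spa: bytes,
               anonce: bytes, snonce: bytes) -> bytes:
    """Derive the pairwise transient key. Both MAC addresses (aa = AP,
    spa = station) and both nonces are mixed in, so an attacker has to
    spoof the client's MAC for the handshake to work at all."""
    data = (min(aa, spa) + max(aa, spa) +
            min(anonce, snonce) + max(anonce, snonce))
    return prf_384(pmk, b"Pairwise key expansion", data)
```

Since the station's MAC is an input to the key derivation, the KRACK man-in-the-middle already has to present the victim's MAC, which is why MAC whitelisting buys you nothing here.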


No. You can't even use MAC white-listing to prevent unauthorized devices from connecting to your access point.


MACs are easily spoofed.


Nope.

This will fool your client (e.g. your phone) into connecting to it before it reaches your access point, then forward packets to your AP (a basic man-in-the-middle).

On the other hand, I am wondering if it manages to correctly forward packets back to the AP if that has MAC filtering on...


Only against an incompetent fool.


Interestingly, the paper mentions that CCMP is affected less than TKIP/GCMP, but not the FAQ.


One thing to clarify, does the attack require the attacker to be already connected to the AP?


No, that was explained in the demonstration video.


Already patched in Arch linux. Just updated all my systems.


This was fixed in OpenBSD back in August.


Assuming neither client nor router is upgraded, can this be mitigated by making the SSID hidden, so the attacker wouldn't know which AP to spoof?


The attack doesn't need to know the SSID to find and replay the affected packet.


hidden APs can be easily seen


Thanks, didn't know that :)


Is it possible to adapt these attacks to HTTPS protocol?


Oh goodie.


So how will this be fixed?


It seems that OpenBSD already patched their source code, and that wasn't to the liking of the researcher. In the future he will delay notifying OpenBSD of vulnerabilities.

Why did OpenBSD silently release a patch before the embargo?

OpenBSD was notified of the vulnerability on 15 July 2017, before CERT/CC was involved in the coordination. Quite quickly, Theo de Raadt replied and critiqued the tentative disclosure deadline: "In the open source world, if a person writes a diff and has to sit on it for a month, that is very discouraging". Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability. In hindsight this was a bad decision, since others might rediscover the vulnerability by inspecting their silent patch. To avoid this problem in the future, OpenBSD will now receive vulnerability notifications closer to the end of an embargo.


This feels like some kind of prisoner's dilemma game theory problem. By defecting from the embargo, OpenBSD gained potential security for its users at the expense of all other users. Overall, this is a loss, unless you use OpenBSD. I have to agree with the researchers on this one; OpenBSD acted selfishly here.


Read that again. We asked to commit without revealing details, he said yes, that's what happened. I guess he changed his mind about that after the fact, but nobody promised not to commit. We didn't "defect" from an embargo unilaterally.


Perhaps "defect" is the wrong word given the circumstances, but the result is the same. There's a good reason for the embargo: this all takes cooperation, as it's not a Nash equilibrium. I still agree with their decision not to include OpenBSD so early in further disclosures, given Theo's short-sighted statement.


> Perhaps "defect" is the wrong word

It's precisely the correct word. Prisoner's dilemmas are simple, mathematically. This was one. OpenBSD defected. The joke's on the security researcher, though, since this doesn't appear to have been their first time [1][2].

Robert Axelrod outlined, in his 1984 classic The Evolution of Cooperation [3], four requirements for a successful iterative prisoner's dilemma strategy. One is retaliating. Security researchers are letting OpenBSD play an iterated game as if it's an N=1, i.e. they're not retaliating. Given the community is playing "always cooperate," OpenBSD's best move is actually "always defect".

[1] https://lwn.net/Articles/726585/ thank you 0x0 [a]

[2] https://lwn.net/Articles/726580/ thank you 0x0 [a]

[a] https://news.ycombinator.com/item?id=15481980

[3] https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation


So does the simple mathematical treatment also include language like "the joke's on ____”? Or was that more of a philosophical interpretation of yours?

Real life is messier than any model.

https://www.quantamagazine.org/in-game-theory-no-clear-path-...


Both your [1] and [2] seem to conclude that violating the embargo had no significant ill effects: "since... the underlying issue was already publicly known, OpenBSD's commits don't change things much." If "defecting" causes no problems for the other participants, does it actually count as defecting? (And if not, how is this a mathematically simple prisoner's dilemma?)


Nice analysis. It definitely seems to be the case.


I mean, I agree too. I sleep better not worrying about bugs that can't be fixed.

I'm mostly here just to correct misstatements of facts. You're welcome to your own interpretation, game theory optimization, etc.


Wellll to be fair, I'm sure if the researcher said no, he wouldn't have committed.


From what I've read, I don't see why everyone is giving you a hard time about this. It sounds like you did exactly what he agreed you could do, and then he changed his mind.


Sounds like a "we technically respected the embargo, just not in principle" sort of thing to me.


We're not mind readers. If he says it's ok, we think it's ok. If other vendors have fucked up months long patch cycles, that's their deal, not ours.


Now that it's more clear what role disclosure deadlines play in cooperating with security researchers, it probably makes more sense to just cooperate than point fingers.


What part of this commit description is "not revealing details"? https://ftp.openbsd.org/pub/OpenBSD/patches/6.1/common/027_n...


All of it? The paper describing the attack is longer than one sentence.



He said it was OK this time, but it won't be in the future (pretty clear from his announced future course of action). So your decision is still subject to criticism.


They agreed, but now they regret the decision and wouldn't make it again. To prevent themselves from doing so, they will not speak with OpenBSD until later in the process.


What's the word for pressuring a person until they make a decision they immediately regret?


It's a loss even if you use OpenBSD. If you break the embargo, you won't be notified in advance anymore. Basically, you get an advantage once but take several losses for a very long time. Overall it's bad, even for OpenBSD users.


The researcher's reaction is correct. OpenBSD maintainers' lack of patience may have led to this vulnerability being discovered and exploited by other people.


The researcher's lack of full disclosure may have led to this vulnerability being discovered and exploited by other people.


Also, an embargo lasting months seems excessive.

You also cannot guarantee that no one who got this information early was working for a bad actor.


I'm just so glad this long embargo meant that everyone had patches ready to go as soon as it expired! Oh wait, they didn't. Good job, CERT.


I wonder when the NSA and CIA was responsibly informed about this vulnerability.


OpenBSD wifi maintainer here.

I was informed on July 15.

The first embargo period was already quite long, until end of August. Then CERT got involved, and the embargo was extended until today.

You can connect the dots.

I doubt that I knew something the NSA/CIA weren't aware of.


In other words, it's malfeasance by the security community for holding out.

There are only a few courses of action. One is to sit quietly and let everyone eventually roll out the fix. And that doesn't work. No fire under people's asses, and the work is delayed.

The other is to release it promptly. Then, at least, we can decide to triage by turning down service X (even if that's Wi-Fi), requiring another factor like a tunnel login, or what have you.

But truthfully, defecting in the prisoner's dilemma played out here was the best choice. The rest of the community's answer is "agree".


No one should care about a community that agrees that releasing silent patches is a good idea. This is exactly the same behavior that created the need for full disclosure in the first place. And no, there aren't just two options, nor are processes binary. It's rather mind-boggling how "the community" has managed to come full circle in such a short time and themselves become the opinionated people they were supposed to be the alternative to.


Really makes me wish you'd told the world. I know all the arguments against that, but this sort of thing is no good either.


Yes, but that would result in them not getting notified for any other vulnerability.


As far as I understood, this attack has no client-side mitigation that could be employed other than treating every wifi as an open network. The attack might already be known to hostile actors or may have become known during the embargo, but full disclosure without an embargo would guarantee that clients are at risk without mitigation. An embargo at least gives time to prep patches and protect at least a portion of the clients.


Either there is a possibility for patches to be prepared during an embargo or there is "no client-side mitigation"; you can't have both. From reading the rest of this thread, it appears that it is quite possible to patch this on clients such that, if you are using a patched client, you are safe. Disclosing earlier would have led to more people having patched clients earlier and hence being safe.


Patching the client is a fix. A mitigation would be a config setting that makes me safer (disabling some unused functionality, ...). So yes, you can have both.


That’s like saying that prior to introducing seatbelts, we should have allowed for a period of time to glue people to their seats because it is preferable to have a mitigation they can apply themselves than a fix the manufacturer has to put in.

If you don’t limit mitigation to "a config setting" (and why would you?!), a patch/new version is the best mitigation you can get.


I limit mitigation to a config setting because that’s what affected clients can do in this case. Everyone patching wpa_supplicant on their android handset is just not going to happen and it takes time for vendors to roll out patches.


> As far as I understood, this attack has no client-side mitigation that could be employed other than treating every wifi as an open network.

I've been doing that for years and recommend others do so as well.

The rise of HTTPS nearly everywhere helps mitigate things a bit. This same type of exploit 5 years ago would have wreaked havoc at the local Starbucks Wi-Fi.


You got everything wrong. If big vendors are unable to patch their proprietary products in an acceptable time, that shouldn't put others at risk. Users shouldn't choose their products...

Think about it in a different way: What if a vulnerability was discovered in TLS and FOSS implementations patched it, but there is an embargo for supposedly protecting some banking software? What if NSA/CIA/other agencies find out about it (they would know immediately) and use it to target users/activists?


This is why embargoes have deadlines. To make the necessary trade-off between "patch as soon as you can, potentially jeopardising the safety of users -- even users of non-proprietary projects" and "wait for everyone to be ready before you patch -- which also jeopardises users". The embargo system deals with this by forcing everyone to agree on a date, and if someone patches after that date then too bad. You may disagree that the deadline was so long, and that could be a fair criticism.

But pretending as though co-ordination of any kind is somehow bad (and then resorting to emotional arguments and so on) is pretty reckless.


I have seen and participated in this disclosure debate for 10 years now. I have come to the conclusion that, in the long run, the least harm approach is full disclosure. There isn't any wiggle room. There are no shades of grey. The whole coordinated response movement is misguided. There are some limited circumstances where it can make sense to delay disclosure, such as creating an imminent threat to human life, but generally full and nearly real time disclosure results in safer software sooner for end users without putting them at some unknown, but high risk level.


Three months is more of a joke than a reasonable time, but one can argue about that if one wants...

> even users of non-proprietary projects

Actually many FOSS projects get only notified on the disclosure date.

Hiding the vulnerability for such a long time does more harm than good. The vulnerability could potentially be exploited by security agencies that necessarily know about it, and it could also be leaked to a bad actor by an employee of one of the vendors.

Hopefully WPA2 isn't that important, but potentially security-sensitive users trusted something that was known by some to be vulnerable for three months! Bad actors could have used it against them.

The embargo resulted in potentially bad actors knowing about the issue, but not vulnerable users.


State actors should be the least of your worries compared to the millions of script kiddies who could use the vulnerability once it is disclosed publicly.


No, as I would know about it by following security news.


Are you so great that you know all the vulnerabilities all the time since the second of disclosure?

Do you seriously expect the other billions of people on the planet to be that great too?


For most of them the day after, when I get a notification from my RSS app...

No. I also don't expect them to choose devices based on security. That is very bad, as vendors won't care about patching their older devices (look at Android devices, home routers...), and vendors won't care about patching their flagship devices fast, since they have the option of requesting very long embargoes.

Making compromises for those vendors, and giving more time for security agencies and other bad actors to silently exploit the vulnerabilities (where FOSS projects would have made patches for users that care), is not the way to go. That philosophy actually makes everybody less safe.


What if [...] is FUD. What if Theo de Raadt works for the FSB? We need to work with the facts.

If you don't agree with an embargo and decide to break it, that's on you. But the consequence is that you shouldn't be surprised if next time you're informed later, or not at all. What OpenBSD proponents and developers are doing right now is damage control. It may work this time, it may work next time, but it won't keep working every time, so pick your fights right. It isn't the first debacle OpenBSD has had with full disclosure either (hint: OpenSSH).

There are also millions upon millions of devices which won't get patched. Given the vulnerability is apparently worst on Linux, and hence Android, do you think all the smartphones running Android 4.3, 4.4, 5.0, and 6.0 will be patched [1]?

[1] https://en.wikipedia.org/wiki/Android_(operating_system)#Pla...


No, I don't. And it's stupid, because it doesn't have to be that way. But telecoms have complicated the situation with their greedy, firmware-reinforced planned obsolescence.


But have the other OS makers released patches already?


For reference, the OpenBSD patch in question released on August 30: https://ftp.openbsd.org/pub/OpenBSD/patches/6.1/common/027_n...



tedunangst: "We asked to commit without revealing details, he said yes" "I guess he changed his mind about that after the fact."

The patch has obviously an explicit description:

"State transition errors could cause reinstallation of old WPA keys."

It's true, however, that anybody who analyzes the diffs would eventually figure that out, as Theo de Raadt argued.

My conclusion is also that the real error was even wanting to give the details to him at that moment, as there's apparently a history of him not respecting embargoes.


Oh, that's the problem? That's too much information? Well, shit.


I still fail to get what you wanted to express with your comment here. I just quoted two sentences from another comment of yours on the same page; did you understand something else?


Not the first time OpenBSD has not respected embargoes, for example https://lwn.net/Articles/726585/ and https://lwn.net/Articles/726580/


"As a compromise, I allowed them to silently patch the vulnerability." The way I read that, they broke no embargo.


They were pressured by OpenBSD to do so, and regret it. That doesn't mean they broke embargo, but it also doesn't reflect well on them. Do you think Theo would've respected the embargo if they had said "no, do not patch until the embargo date?"


Yes. He would have tried to persuade them, perhaps cut out the researchers to persuade CERT.


Who says they were pressured?


A bunch of dudes on a linux mailing list lack the authority to prevent openbsd from fixing things.


True, they don't. However, this researcher has the authority to not notify the openbsd team in advance any more and he already announced that he'll keep his cards closer next time. What happens if sufficient researchers come to the same conclusion?


What happens if a vendor or researcher is in bed with the NSA and they use the exploit while embargoed?

The whole thing is a shit show and really I'm rather more behind OpenBSD's approach.

Edit just to expand on this as someone deleted a post ....

----

It's slightly more complicated than the prisoner's dilemma. The prisoner's dilemma doesn't account for a large facet of the problem which is being discussed here. If all the good parties participate and coordinate then we're better off. The problem is there are outlying circumstances which means that not everyone will be included:

1. If someone kicks someone out (OpenBSD) on political whim playing CYA, they no longer benefit.

2. If a party is not let in, they no longer benefit.

3. If someone is unaware of it, they don't benefit.

This turns it into a security monopoly where the big vendors get exclusive rights to embargo and exclude smaller vendors and control the disclosure process on their own schedule.

The first thing the people outside of the club find is that they wake up on Monday morning and have to clean up a shitstorm of monumental proportions with fewer resources than the monopolised vendors, who've had time to deal with it.

Then there's the assumption that the monopolised vendors are trustworthy, which is 100% impossible to validate and therefore invalid.


Yeah, the hysterical part is how people think the distros list is leak-proof. It just doesn't leak in nice public ways that allow "responsible white hats" to wag their fingers. Raise your hand if you can confidently say you know the full back-channel distribution of a notification to distros.


Exactly that!

No bullshit please - you guys do a wonderful job of avoiding it and stamping on it when it does turn up. Keep up the good work :)


Ultimatum games [1] are a subset of prisoner's dilemmas. That covers Nos. 1 and 2. Assuming researchers want something from those they disclose to, it makes sense for them to cast the widest net possible while minimising the risk of defection. Balancing that optimization is a game as old as civilization.

> This turns it into a security monopoly where the big vendors get exclusive rights to embargo and exclude smaller vendors and control the disclosure process on their own schedule.

Not necessarily. It turns into a monopoly of those who can show themselves to be credible partners. This exhibits incumbency bias which in social context we call track record. It's not nearly as exclusionary as you're making it out to be.

> Then there's the assumption that the monopolised vendors are trustworthy which is 100% impossible to validate and therefore invalid

This is common in trust problems. You don't need to be 100% sure everyone you're dealing with is trustworthy to work with them because we don't live in a single-iteration game. Again, iterations of retaliation and forgiveness remove the need to have 100% certainty about a player's intentions.

[1] https://en.wikipedia.org/wiki/Ultimatum_game


Credible partners? Yeah right: http://securityaffairs.co/wordpress/56411/hacking/windows-gd...

No one is credible here. The very nature of a closed agreement of secrecy between arbitrary parties is the opposite of credibility.


I am generally ok with that. Embargoes are retarded.


Sounds like the researcher is at fault for putting OpenBSD on their list. If you cut a deal with someone who serially defects, at a certain point the onus shifts from them to your lack of foresight.


The problem isn't a "fool me once shame on... fool me you don't get fooled again" situation: the problem is that one unscrupulous party is unscrupulous to different parties, and those parties are, at different times, unaware of it.


> the problem is one unscrupulous party is unscrupulous to different parties and the different parties at different times are unaware of it

Sure, but eventually you get called out on it in a public forum, like this one, and people stop giving you goodies going forward. I would consider it acceptable practice to, when considering dealing with OpenBSD (or people who are close to them), (a) withhold vulnerabilities until after the embargo date or (b) refuse to give any information unless they sign a binding non-disclosure agreement committing them to the deadline under pain of penalty. (The latter is an option because it appears, in this case, they broke the spirit if not letter of the agreement. The solution to that problem is legalese.)


Hi, I am the person you are accusing of mischief.

I didn't break any agreement. I agreed with Mathy on what to do, and that's what I did.

The fact that Mathy decided to get CERT involved and subsequently had to extend the embargo has nothing to do with me.

(edit: typo)


To be clear, I accuse you of nothing less than playing a rational response to the researcher's apparent "always coöperate" strategy. "Defect" in a prisoner's dilemma context does not mean "breach" in a legal one. (For example, an OPEC member defecting has zero legal consequences. It does, however, affect their standing in the next round of negotiations.)


'Defect' doesn't mean 'breach' in a legal situation, it also doesn't mean 'sociopath and/or economics professor' in a psychological one, but people form connotations, so be careful what you accuse.

Anyway, I think you're playing the PD analogy too much... But I'll play a bit too. Construct a payoff matrix. What does real defection look like? It's patching mid-July, when the patch was received, instead of waiting to the agreed-upon end-of-August time. I see no defection here. There could only be one if, after CERT was involved and set a new date, Mathy asked OpenBSD to postpone the prior date agreement, and instead of cooperating they patched immediately for the biggest gains to their users. There is no mention of such a request, hence it probably didn't come.


I support your decision.

If Mathy was concerned, why did he wait to notify CERT? Should that not have been the first priority?


As a user I am completely fine with that.


Even when the author states that now as a result of that selfishness OpenBSD won't get notified about vulnerabilities until well after everyone else?


> OpenBSD won't get notified about vulnerabilities until well after everyone else

Which doesn't make a difference if OpenBSD still gets their patch out at the same time as everyone else. Unlike other vendors, it doesn't take OpenBSD four months to go from vulnerability notification to patch release; if you look at previous disclosure timelines, they typically have a patch out in days.


What about the vulnerabilities that OpenBSD notice? Works both ways. And they have an active interest in such things and have discovered as much as any famous-for-five-minutes security researcher.


> [OpenBSD] have discovered as much as any famous-for-five-minutes security researcher

TL;DR: OpenBSD acted rationally if they'd prefer to go it alone, which seems to be their culture. To their credit, it's worked pretty well so far. But you can't have your cake and eat it too. If they prefer a mad scramble after public disclosure, they'll get it. But they shouldn't get early notice from responsible researchers.


See my comment here. It sort of replies to this anyway: https://news.ycombinator.com/item?id=15482285

I don't believe that embargo is healthy or responsible! If anything its a monopolising factor.


It sounds rather like he is trying to blame OpenBSD for his own mistake. As multiple people from OpenBSD have said, he agreed they could apply the fix, so they did. He didn't have to say they could. The fact that CERT persuaded him to extend the embargo later is not their fault.


The author doesn't know that FreeBSD, Debian and OpenBSD people cooperate and share knowledge, so most probably OpenBSD developers will know about the issues, just not from an "official" email.


Furthermore by not even attempting to include OpenBSD in some embargo agreement, there's no reason for OpenBSD to not patch as soon as they hear about it. Indeed that's what seems to have happened on the linked 'evidence' about them not respecting an embargo of a linux distro group they're not part of.


I don't get how they can run a MITM attack without knowing the secret passphrase. Can someone explain this in layman's terms?


Because you are replaying the packet, which doesn't need the key.
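To make that concrete: replaying handshake message 3 makes the client reinstall the key and reset its nonce, so the same keystream ends up encrypting two different packets. Here's a toy sketch (purely illustrative, not the actual CCMP construction) of why that keystream reuse is fatal:

```python
# Toy illustration of nonce/keystream reuse. The keystream stands in for
# f(key, nonce); the attacker never sees it, yet XOR-ing two ciphertexts
# encrypted under the same keystream cancels the key out entirely.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(32))  # placeholder for the per-nonce keystream
p1 = b"attack at dawn"        # packet sent before key reinstallation
p2 = b"retreat at ten"        # packet sent after, under the SAME keystream
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# The attacker observes only c1 and c2, yet:
assert xor(c1, c2) == xor(p1, p2)    # the keystream cancels out
assert xor(xor(c1, c2), p1) == p2    # any guessable p1 reveals p2
```

This is why the attacker doesn't need the passphrase: forcing the nonce reset is enough to start recovering plaintext from traffic alone.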


thanks that makes sense


This is too big; I find it hard to believe it's not a backdoor.


"This can be abused to steal sensitive information such as credit card numbers, passwords"

This really isn't true (because that kind of information is protected by TLS) and the article is highly disingenuous to not say so.

Nobody has trusted WiFi encryption as protection for sensitive information for more than a decade.


The demonstration video shows the researcher sniffing passwords from match.com, which uses TLS. The catch is that the site doesn't use HSTS, and so is vulnerable to sslstrip.


True, using a VPN gives you control over "an" encryption layer for your traffic; relying on the site's HTTPS will always be less ideal.


Did you watch the demo video[1]? Apparently some sites (Vanhoef's example was Match.com) are susceptible to MITM via Moxie Marlinspike's sslstrip tool[2].

[1] https://youtu.be/Oh4WURZoR98

[2] https://github.com/moxie0/sslstrip
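The core trick behind sslstrip can be sketched in a few lines (a simplified illustration of the idea, not the actual tool): the MITM proxies the victim's plain-HTTP traffic, rewrites secure links in the responses, and keeps the TLS connection for itself upstream.

```python
import re

# Simplified illustration of sslstrip's core idea: rewrite https:// links
# in proxied HTML to http://, so the victim's browser keeps speaking
# plaintext while the attacker talks TLS to the real server.
def strip_https_links(html: str) -> str:
    return re.sub(r"https://", "http://", html)

page = '<a href="https://match.com/login">Log in</a>'
print(strip_https_links(page))
# <a href="http://match.com/login">Log in</a>
```

HSTS exists precisely to defeat this downgrade: a browser with a cached or preloaded pin refuses to follow the rewritten http:// link at all.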


"This can be abused to steal sensitive information such as credit card numbers, passwords, chat messages, emails, photos, and so on."

As much as this is a scare tactic to get people to demand vendor patches, it's been true for https for a while.

Browsers don't have any trick (that I know of) to enforce HTTPS on first connection. HSTS is defeated by simply rejecting connections to HTTPS: the user will retry the site from different devices and clear their HSTS cache in order to reach it. And that's assuming the site used HSTS at all.


All major browsers implement a HSTS preload list[1] to get around the first connection problem. Manually deleting the HSTS pin for a site is quite involved and not something I'd expect most users to do.

[1]: https://hstspreload.org/
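For reference, the policy a site opts into is a single response header (RFC 6797); a minimal, simplified parser might look like this (directive names are real, error handling is omitted):

```python
# Minimal sketch of parsing a Strict-Transport-Security header (RFC 6797).
# The "preload" directive is what a site sets before submitting itself to
# the browser preload list at hstspreload.org.
def parse_hsts(header: str) -> dict:
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

print(parse_hsts("max-age=31536000; includeSubDomains; preload"))
# {'max_age': 31536000, 'include_subdomains': True, 'preload': True}
```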


Preload lists are not a realistic solution (you can't preload the whole internet), and a sufficiently complicated site will be subverted via third-party dependencies. And does uninstalling a browser not clear the HSTS cache?


Perfect is the enemy of good. A large portion of sensitive traffic is protected by HSTS today, and the preload list compresses well. By the time it'll become a problem, we'll hopefully be at the stage where HTTP is treated as insecure anyway.

I'm not certain if uninstalling a browser clears the cache (do uninstalled browsers retain their profiles?), but preloaded sites would not be affected - they're included in the browser binary. Either way, let's not act like there's a massive hole in HSTS because there's a possibility that users might go as far as reinstalling their browser to visit a not-preloaded HSTS-enabled site that's being targeted.


I'm not saying it's a massive hole; I'm saying it's an easily preventable hole that the entire industry is ignoring for unknown reasons. One simple URI change could make HSTS obsolete and fix the hole with no need for awkward workarounds and half-measures. Nobody has yet explained to me why "good enough" is better than "fixed".


> you can't preload the whole internet

"You wouldn't HSTS the whole internet, would you?"

Google: "Hold my beer..."

https://nakedsecurity.sophos.com/2017/10/03/google-is-making...

Right now it's mostly unimportant new domains, but it's a start, and they could convince other domain registrars to follow suit.


You can't preload the whole internet, but by getting the top xx thousand you cover 99% of all Chrome users' traffic. It's not perfect, but it is very, very effective.



