+1 for a good read, really enjoyed his writing style.
For those unfamiliar, Matthew Green is a cryptography researcher and professor at Johns Hopkins.
Edit: TIL it's Johns Hopkins, ty /u/dEnigma
While I'm sure this can't take much of the blame, it sure strikes a chord. The IEEE standards process seems insanely archaic and broken in the open-source era.
This really does highlight the absolute disaster zone that the Android handset market has become as far as updates are concerned. I'm sure the Pixels will get a fix relatively quickly but almost every other Android user is going to be left in security limbo.
In general most bigger manufacturers have been somewhat decent in updating their flagship devices. With a Sony flagship from the last 18 months for example, you usually won't run more than two months behind on security updates. Samsung is similar if I remember correctly. Hopefully a big exploit like this will be enough of a kick in the butt to get manufacturers releasing security updates faster.
I totally agree with your hope that this will kick both the manufacturers and Google in the butt enough to get something done about this. I don't like our chances though!
There is a problem with handset abandonment, but it's true across all vendors, and it does not support sequence7's claim that this is solely an Android problem.
For example, under the Consumer Rights Act 2015 in the UK the product must last for as long as a reasonable person would expect it to, and Apple's interpretation of that is five/six years (see https://www.apple.com/uk/legal/statutory-warranty/).
The Consumer Rights Act 2015 also covers unfair terms in sales contracts (and at least in Scotland EULAs are part of the sales contract, per Beta Computers (Europe) Ltd v Adobe Systems (Europe) Ltd; I don't know the status elsewhere in the UK), and it's quite likely you could just go through most of the possible contractual outs and argue they are unfair terms.
And they are more expensive than many computers you can buy on the market. 2 years of support on a device that can cost $500+ isn't acceptable.
We are 4-5 years into the period where people have had sub $300 choices, so there is an alternative to spending $500+ on a device that comes with 2 years of support. Maybe not a fantastic alternative, but the $300-$500 extra that people choose to spend says something about what they care about.
You don't know that. Apple could very well release an iOS 10 update for this.
But in both cases the stated policy is that the devices will not receive any more updates, either feature or security.
Hopefully Android Oreo (8.0) with Project Treble will stop this ridiculous trend for the rest of us. With the smartphone market now being saturated (budget phones of 200 EUR are very decent these days), we may end up with long-term support on older yet still decent devices.
It's just a word that means you paid more for having all the bells and whistles that the OEM could offer at the time, instead of going for the next-best model or such.
Fortunately, whether you can afford the best of the best with all the optional extras or a cheaper second-tier model doesn't affect the security of the device. And it shouldn't, because if you can't afford a "flagship" device, doesn't mean you can afford to get hacked either.
Unfortunately, while the security-update frequency ought to be comparable, it turns out that it's mainly comparably bad :-/
That's still truly terrible compared to Apple's legacy device support. iOS 11 and future patches still support even the iPhone 5s, a phone from 2013.
Although websites or apps may use HTTPS as an additional layer of protection, we warn that this extra protection can (still) be bypassed in a worrying number of situations. For example, HTTPS was previously bypassed in non-browser software, in Apple's iOS and OS X, in Android apps, in Android apps again, in banking apps, and even in VPN apps.
Either way, this disclosed vulnerability only involves a link-layer man-in-the-middle in order to collect traffic. Active manipulation of traffic (required for TLS interception) is more complicated.
"Finally, although an unpatched client can still connect to a patched AP, and vice versa, both the client and AP must be patched to defend against all attacks!"
"you can try to mitigate attacks against routers and access points by disabling client functionality (which is for example used in repeater modes) and disabling 802.11r (fast roaming)."
Well, hopefully this means no kernel patch will be needed.
You get updates every week.
Your advice is valid, but it’s important to not have a false sense of security.
I found it interesting that, in his article, he said:
"With our novel attack technique, it is now trivial to exploit implementations that only accept encrypted retransmissions of message 3 of the 4-way handshake. In particular this means that attacking macOS and OpenBSD is significantly easier than discussed in the paper"
but elsewhere it said recent versions of OS X and iOS are not impacted. I wonder if the "safe" OSes are only vulnerable to the blocking/replay but not the decryption of data?
My UniFi AP-PROs show up today so I'll make sure to update them first thing.
Also, I'm having a bit of a hard time understanding the attack. It sounds like he forces them to connect to his AP, performs the attack, then allows them to connect to the intended network with the zeroed key, THEN is able to sniff that client's traffic because he knows their key? If I understand correctly, this means he cannot sniff the whole network's traffic, only the traffic between the attacked client and the AP? This makes me wonder about the meaning of a pre-shared key, but I'm guessing the PSK is only used to setup the relationship between client and AP, and then after the initial connection/pairing the pre-shared key is no longer used...
He forces them to connect to his own AP and forwards all traffic to the destination so that the client is unaware it has been redirected.
He then forces the client to re-install the key, which (on anything derived from wpa_supplicant, e.g. Linux, Android, etc.) the client has blanked out after first use, so the key it reinstalls is now all zero bytes.
He can continue to forward the traffic to the destination so that the client gets responses, but now he can decrypt all of the traffic too.
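The sequence above can be sketched as a toy model of the wpa_supplicant behaviour being described (deliberately simplified; the class and field names are mine, not the real code):

```python
# A toy model of the wpa_supplicant key-handling bug described above
# (simplified; class and field names are mine, not the real code).
class ToySupplicant:
    def __init__(self, negotiated_key: bytes):
        self.stored_key = bytearray(negotiated_key)  # key from the 4-way handshake
        self.installed_key = None

    def install_key(self):
        # Install whatever is currently stored...
        self.installed_key = bytes(self.stored_key)
        # ...then wipe the stored copy so the key doesn't linger in memory.
        for i in range(len(self.stored_key)):
            self.stored_key[i] = 0

s = ToySupplicant(b"0123456789abcdef")
s.install_key()              # normal handshake: the real key is installed
first = s.installed_key
s.install_key()              # attacker replays message 3: key is re-installed
second = s.installed_key

assert first == b"0123456789abcdef"
assert second == bytes(16)   # all zero bytes: traffic becomes decryptable
```

The zeroing-after-install is a reasonable hygiene measure on its own; the bug is that a forced re-installation then installs the wiped (all-zero) copy.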
For clients that re-install the correct key (which the attacker does not recover in any way), the attacker has to rely on snooping enough encrypted data to exploit the resulting keystream reuse: the key re-installation also resets the frame counters, which leads to nonce reuse, a serious problem in ciphers like AES-GCM.
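Why nonce reuse matters can be shown with a toy stream cipher (a deliberately simplified sketch, not real CCMP/GCM): when reset counters cause the same key+nonce pair to be used twice, the identical keystreams cancel and an eavesdropper learns the XOR of the two plaintexts without ever knowing the key.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream built from a hash; NOT a real cipher, just an illustration.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))

key, nonce = b"secret-key", b"nonce-0"
p1 = b"attack at dawn!!"
p2 = b"defend at dusk!!"

# Key re-installation reset the counters, so both frames use the same nonce:
c1 = xor_encrypt(key, nonce, p1)
c2 = xor_encrypt(key, nonce, p2)

# The identical keystream cancels out: an eavesdropper who never learns the
# key still recovers the XOR of the two plaintexts.
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(p1, p2))
```

With known or guessable plaintext in one frame (protocol headers, for instance), the XOR directly exposes the other frame's contents.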
If you choose not to use public WiFi because you can't "trust" it, then you now need to stop using your private WiFi too (until your systems get appropriate patches).
Using a VPN is the best way to mitigate this until your device is patched, assuming you trust your VPN provider or run your own VPN.
Edit: Actually, even if you don't trust your VPN provider, you'll be protected against this attack (KRACK), given their client is implemented properly.
Unfortunately this is a big part of trusting your VPN provider. It's shocking how bad the situation is, especially, it seems, with providers marketed via Android apps.
I'm sure, given the size of that list, that they tested some of the biggest players in the VPN space. I think it'd be good to know which apps were tested and didn't show any issues, especially in light of KRACK and the Android bug in wpa_supplicant.
With critical bugs like these, it's certain Google will require recent devices with enough affected users to be updated ASAP. Expect an update within a few weeks.
Also you could install a better version of Android on your phone rather than an outdated vendor version. That will probably fix more security related issues than just this one :)
But it's very difficult to ensure that all the communications your device is making (background services, vendor apps...) go through that channel.
Reality is that DNS remains and will continue to remain a giant hole in TLS.
Relying on the efforts of unpaid volunteers doing their best to hack together binary blobs is also not the best idea...
Not all devices are supported by major ROM distributors, nor is the support guaranteed to be endless or current... (even some devices as major as the Galaxy S6 for example)
1. Beyond difficulty of porting blobs, you might well also simply get your updates from a custom ROM faster than you'll get them from the manufacturer, even if it's still supported. That in itself is an advantage.
2. Backporting updates to third-party components can be simple (assuming a stable ABI/API); the easiest case is probably just dropping binaries from a similar phone that did get updated into a zip file and then flashing it. Look at busybox installers, for example; all you need is a version compiled for your hardware. Java components can sometimes be changed as well (see Xposed). This sometimes works on desktop systems too: I've been able to 'fix' older games just by dropping a newer version of a DLL into the game directory (DirectX, OpenAL, etc.).
3. Maybe the company is just stupid. Motorola (or is it Verizon?) shipped Marshmallow for the Moto E2 in Europe, but not in the US. I'd expand on this but I'm on mobile and I'm lazy.
In practice, everything of value should be going over TLS. If you're worried you should be using a VPN on untrusted networks. This attack, if I'm reading it right, doesn't do anything someone on your wlan or lan can't do right now via ARP poisoning and other attacks. So being on that work connection or restaurant wifi is almost the same risk level of this attack.
In the case that it is, it's curious how you would inject data into an SMB stream without failing checksums from client-side checking. Maybe it's trivial to deal with, not sure.
If the WPA2 protected wifi network is using AES, which is the most common in my experience, then they won't be able to inject any data. From the Krack website:
If the victim uses either the WPA-TKIP or GCMP encryption protocol, instead of AES-CCMP, the impact is especially catastrophic. Against these encryption protocols, nonce reuse enables an adversary to not only decrypt, but also to forge and inject packets.
If this is an actual in the wild exploitable issue, there will be patches very quickly for handsets in the support period, as quickly as there is for iOS. This has been the case repeatedly before as well.
What a weird post in general. Maybe wait to complain about this a month down the line or so? Instead it's just effectively noisy rhetoric.
The use of the word "hopelessly" was probably unnecessarily dramatic, I agree, but I'll leave it there so your comment makes sense.
Remember, 'S' in IoT is for Security.
One of the best quotes I've heard in a while.
From the source: In general though, you can try to mitigate attacks against routers and access points by disabling client functionality (which is for example used in repeater modes) and disabling 802.11r (fast roaming).
For ordinary home users, your priority should be updating clients such as laptops and smartphones.
Are manufacturers like Linksys and D-Link issuing patches now, or will it be enough to have Windows/OS X/iOS/Android updates enabled? Or do I need both?
Can anyone explain the timeline for releasing such significant security findings? Why is it disclosed to the public half a year after being submitted for review? I'd guess the (publicly funded) research behind it is a lot older than that.
e.g. Dan Kaminsky's discovery of DNS cache poisoning had a 5 month responsible disclosure embargo.
From my understanding of research at public institutions there is a long period of time and steps between finding something interesting and submitting a paper for review.
Why not disclose the vulnerability first to concerned parties and then write up a fancy research paper? Why the other way round?
Only two explanations I could come up with: either there is a very short time frame between identification of the vulnerability and the writing of the paper, or further research was needed. Or... I don't know.
It's 3 months. That's a reasonable delay if you have to alert lots of manufacturers and they need time to roll out critical patches to lots of devices.
Please stop confusing slowness with an intent to delay or ignore.
I'm not convinced it was a bad decision. Why would you want to leave your users vulnerable? It's possible that this has been exploited in the wild.
Up until now, there were no indications that this was being exploited publicly. After a flaw like this gets known (whether through a coordinated disclosure or through OpenBSD's early patch) you can be assured people will be exploiting this.
Do you both stay silent and take the minor risk of your users being vulnerable for a short time longer whilst patching and disclosure is being coordinated with all parties (-1/-1), or do you "betray party B" but get your own users secured as soon as possible (-3, 0).
I think coordination makes more sense in a flaw as big as this.
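For what it's worth, the one-shot arithmetic with the payoffs suggested above does favour defecting for the defector, while the combined outcome is worse. A trivial check (the payoff numbers are the hypothetical ones from the parent comment, not measurements):

```python
# Payoffs as (defector/OpenBSD, everyone else); the numbers are the
# hypothetical ones from the comment above, not measurements.
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),  # everyone waits out the embargo
    ("defect", "cooperate"): (0, -3),      # patch early; others stay exposed
}

# In the one-shot game, defecting is better for the defector alone...
assert payoffs[("defect", "cooperate")][0] > payoffs[("cooperate", "cooperate")][0]
# ...but the combined outcome is worse, which is the case for coordination.
assert sum(payoffs[("defect", "cooperate")]) < sum(payoffs[("cooperate", "cooperate")])
```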
> To avoid this problem in the future, OpenBSD will now receive vulnerability notifications closer to the end of an embargo.
i.e., _explicitly signalling_ that this researcher intends to play "defect" with OpenBSD in future rounds, should future rounds occur.
In this case, it was not a short time.
Is this attack likely to generate log evidence on affected APs in their default configuration, or is it so far down the stack that no evidence is generated and nobody could refute this claim?
Anyone got any suggestions for options?
You could just ... buy an iPhone and get timely security updates for years.
EDIT: Downvote if you want, but if iOS 11 contains this security fix exclusively and not iOS 10, then an iPhone 5s bought on 20 September 2013 is going to get this fix. If Apple release an iOS 10 update and you bought an iPhone 5 on 21 September 2012 you're covered too.
> Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code, including other apps.
And section 2.5.6:
> Apps that browse the web must use the appropriate WebKit
As far as I can tell, you can install a web rendering engine that is not the built-in WebKit, as long as you only use it for HTML/JS that come with your app. At that point the JIT caveat applies.
But regardless, with your own device, you can run whatever code you want on it.
> (if it is not against the app store guidelines)
But it has very few kernel security patches: https://cve.lineageos.org/android_kernel_motorola_msm8610
I'm using it successfully with LineageOS 14.1 (Android 7.1.2).
It attempts to detect root or modifications to the ROM by malicious software.
Certain newer devices have secure boot attestation that may cause SafetyNet to fail unless spoofed to be a different device which does not have such attestation.
Also, neither root nor an unlocked bootloader is required to make "proper backups". Some data actually can't be backed up, and for some data there is no point in making a backup. If the goal is to be able to restore the system to a specific, known state, a bit-for-bit image backup of the entire filesystem is just one way to accomplish the task.
So in other words "yes, that is a requirement that will eventually be on all android phones"? Am I misunderstanding something? Older phones being an exception does me little good going forward.
> Also, neither root nor an unlocked bootloader is required to make "proper backups". Some data actually can't be backed up, and for some data there is no point in making a backup. If the goal is to be able to restore the system to a specific, known state, a bit-for-bit image backup of the entire filesystem is just one way to accomplish the task.
The last time I tried adb backup and restore, it was a mess. Multiple apps like Skype had no data. And authenticator explicitly opts out of being backed up.
Titanium backup, on the other hand, works perfectly.
Ideally I would just have a rooted phone, but then safetynet complains, and I can't even use Netflix and pokemon. As an alternative I could accept an unrooted but unlocked phone, and root it only when making and/or restoring backups. But having neither is a big hassle.
To date, it's been quite possible to bypass any protections built on this - I believe it may even be possible without spoofing the device in this way, but in any case, Magisk works on any device available today.
It's really not the same as being free of annoying and unhelpful restrictions.
I'm actually persuaded that I don't need terminal root access on a device (except for system debugging), but rather a firmware signed with my own release keys, and apps that need privileged access baked in.
Also fairly actively developed and supports a wide range of devices.
'There is no evidence that the vulnerability has been exploited maliciously […].'
That is probably about to change …
The client is forcibly disconnected from the WiFi network and reconnects to the attacker's network instead.
The attacker doesn't need to know the WPA2 password but it accepts the connection setting the encryption to zeros.
The client thinks it is connected to the original wifi network and continues as normal.
Wifi traffic is intercepted and unencrypted.
There's no need for a second AP in all this, just someone in range of the client who can replay packets to the clients.
(Good TLDR here: https://blog.cryptographyengineering.com/2017/10/16/falling-... )
How would you drop packet 3 without a new AP?
> Note that the adversary cannot replay an old message 3, because its EAPOL replay counter is no longer fresh.
And a related update from the TLDR post you originally referenced (which I believe is causing confusion):
> Update: An early version of this post suggested that the attacker would replay the message. Actually, the paper describes forcing the AP to resend it by blocking it from being received at the client. Thanks to Nikita Borisov for the fix.
The client is tricked into moving to what it thinks is the same WiFi network running on a different channel, but is actually the attacker's network instead.
> The attacker doesn't need to know the WPA2 password but it accepts the connection setting the encryption to zeros.
The attacker doesn't need to know the WPA2 password, and (for Android and Linux clients) the client then defaults to an encryption key of all zero bytes.
> The client thinks it is connected to the original wifi network and continues as normal.
> Wifi traffic is intercepted and unencrypted.
Wifi traffic is intercepted and can be decrypted (since the encryption key - all zero bytes - is now known).
... if transmitted over plaintext http
The only protection here is HSTS (which is not enabled by most websites, though major ones like banks usually have it) and manually typing https:// in your address bar.
CUA does have it - https://www.ssllabs.com/ssltest/analyze.html?d=ob.cua.com.au
Bankwest does but has some awful problems elsewhere - https://www.ssllabs.com/ssltest/analyze.html?d=ibs.bankwest....
But yeah, Westpac and NAB don't, and in addition to the ones you tested, ANZ and St. George don't have it either. That's pretty unacceptable really.
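For anyone auditing their own sites: HSTS is just a response header, and the policy is easy to inspect. A minimal sketch that parses a Strict-Transport-Security value offline (no real requests made; the example policy string is made up):

```python
def parse_hsts(header: str) -> dict:
    """Parse a Strict-Transport-Security header value into its directives."""
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # Value may be quoted per the HTTP grammar; strip quotes before int().
            policy["max_age"] = int(directive.split("=", 1)[1].strip('"'))
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

# A typical strong policy: one year, applied to all subdomains, preloadable.
policy = parse_hsts("max-age=31536000; includeSubDomains; preload")
assert policy == {"max_age": 31536000, "include_subdomains": True, "preload": True}
```

Note that `max-age=0` is also valid and tells the browser to forget the policy, so "header present" alone isn't proof of protection.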
It might be worth it.
I remember spending hours trying to figure out why Google Adsense wouldn't render correctly. In the end I figured out that it was Adblock's fault :))
Is this issue any different to using open wifi at a cafe, which many many people do, relying on HTTPS for their security? (This is an honest question)
The risk is that you may do things on a protected network assuming it really is protected - this is more of a thing for organisations rather than consumers, for example organisations might have unprotected services accessible over their office wifi.
"Because Android uses wpa_supplicant, Android 6.0 and above also contains this vulnerability. This makes it trivial to intercept and manipulate traffic sent by these Linux and Android devices. Note that currently 41% of Android devices are vulnerable to this exceptionally devastating variant of our attack."
It seems like vendors need to eventually come to some consensus on how to change the protocol instead of each fixing it in their own way.
Seconding this. I wonder if it is something fixable at the OS level, or if individual WiFi drivers need to be updated too.
Would love to see someone throw together a list of OSes, routers, and other WiFi gear that is known to be patched/unpatched/unknown.
Microsoft's status is unclear at the time of writing.
So, we cannot trust even formal verification?
> it’s a factual statement. In formal analysis, definitions really, really matter!
If lack of definition implies flaws in formal verification, does that mean we need an additional formal verification of formal verification?
> We need machine-assisted verification of protocols, preferably tied to the actual source code that implements them.
Haskell, here is your opportunity :-)
It's explained in the article: 2 unit tests, 0 integration tests. The formal verification appears to prove correctness of the 2 pieces independently, but not of their composition.
> Haskell, here is your opportunity :-)
You're just moving the problem to the correctness of the compiler.
But it would be a huge leap forward.
I'm a complete layman in this field, but mustn't it bump against the Incompleteness Theorem at some point? There's no way to prove your definitions.
A reason to use https even for the most basic websites, including the ones embedded in IoT devices on local networks.
On page 30 of the presentation: "Authenticator may (or may not) re-use ANonce"
Wonderfully Poetic Acronym
WPA Privacy Attack!
Wi-Fi Protected Access
Wasn’t Programmed Appropriately
Wads of Potential Attacks
Wireless Public Access
Without Prior Allowance
Well, Pretty Apocalyptic
When Patches Arriving?
Wardrivers, Present Arms!
Weaponized Privacy Assault
Wardriving’s Productive Again
Wide-open Point of Access
Wrecks Privacy Automatically
Welcome, Protocol Attackers
Where Patches, Admin?
Worthless Privacy Attempt
Wrong Protocol, Admin
Won’t Protect Anything
Weak Privacy Attempt
Waste of Precious Attention
Wins Prying Award
Wired Past, Again
If I'm plugged into the router directly then I should be good, because it eliminates the wifi handshake. So even though other devices on the wifi network could be affected, the node that is plugged in is safe against this?
Eh, yes, if you're plugged in you're good because it eliminates Wifi, period.
For that one client, anyway.
Funnily enough, OpenBSD didn't implement WPA(2) for a while. Instead, they were forcing their users to use IPsec and OpenSSH.
It would be nice if there were a rule which package repos and distros would adhere to. The rule would adapt as vulnerabilities are discovered: any package that has had a security issue would always be required to be updated to the latest version in the next release or sooner. The list of such packages would grow over time and hopefully prevent some future attacks. Obviously it's not foolproof, but every little bit counts.
Apply the minimum necessary change to solve the problem.
This means cherry-picking the mainline patches where possible, or back-porting them where modification is required for them to apply (and work as intended) on older releases.
Especially with older versions it often isn't possible to update to a later upstream release because that depends on later versions of other packages. The dependencies can rapidly multiply to affect tens or even hundreds of packages.
Ubuntu patches were prepared and released within 4 hours of the security team being aware of the vulnerability. Same goes for Debian.
I looked here and I don't know where to pick up the patch; I also ran the update manager in my Ubuntu distro, but no dice :(
Edit: also at that time this was at least subjectively significantly easier to setup than wpa_supplicant ;)
No, luckily implementations can be patched in a backwards-compatible manner.
1. Does not affect long-term credentials - certs, wifi passwords are still safe. Rather, confidentiality (secrecy) from client --> AP is affected, and in some cases packet forgery is possible (integrity).
2. Actually accomplishing this attack, for now, requires special and expensive hardware (mid- to high-range SDR gear). It's also not that reliable outside of a lab environment.
3. Everything you care about _should_ be going over TLS, which mitigates all effects of this attack. If it isn't, fix it.
This is a great moment for you to fire up Wireshark and audit the traffic going over your wireless link. If it's not adequately protected and you care about it, fix it.
So - I agree with you that there's a barrier to entry, but it's not that big of a barrier.
Give: http://www.tp-link.com/us/products/details/cat-5520_TL-WN722... a shot. It will let you setup an AP in a VM easily, etc. It is our currently anointed dongle for VM hosted MiTM setups.
I haven't tested it, but it should work fine if they are just using mods to hostapd.
My suggestion is Alfa AWUS036NHA (The last three letters are important!) which has AR9271 chip with great support with the ath9k driver.
Thanks for the info... I am a little disappointed, the ones we have been using are all very reliable and worked out of the box, so we must have the Atheros ones. It also seems impossible to order the older version specifically.
Yep. I've been trying for a while, but couldn't be sure if what I was looking at was version 1. Especially since the Alfa alternative is bulkier.
Also, confirming, everyone even as recent as a few months back seems to have gotten a 1.0 or 1.1 version, but newer ones are now 2.0
https://imgur.com/a/jcnbE (one from my bag).
1) The paper claims the confidentiality compromise allows the attacker to hijack a TCP connection: "allow an adversary to decrypt a TCP packet, learn the sequence number, and hijack the TCP stream to inject arbitrary data", and this in all cases, even those where it doesn't allow forgery (CCMP).
2) There's no such claim on the paper and according to the researcher, exploiting this on Android and Linux is trivial. Apparently also macOS. Did you see the video on their website?
3) There's no way for you to control this (apps, https stripping, for instance). Most importantly, there's no way for the average user to control this, short than using a VPN.
Again, as far as Wi-Fi security goes, seems pretty end-of-the-world to me. I don't think the huge attention this is getting is unwarranted.
The attack you describe is a standard one: break an existing secure TCP connection and trick the target into re-creating it to a host controlled by the attacker via ARP poisoning or route hijacking, then either convince the target to accept a bogus cert or redirect to an insecure connection. In the former case the issue is that browsers include way too many root CAs, and those CAs can issue certs for any domain; in the latter case the issue is that users are not paranoid enough.
The point is that nobody could do what you're describing on a WPA2-protected Wi-Fi network before, and now they can.
Remember that the attack mostly affects client implementations and therefore still needs proximity to the victim(s). This makes most of the end-of-the-world scenarios impractical (they even state this in their Q&A) and leaves exploitation to directed attacks/APT groups alone.
I don't think it's a lot of consolation saying something along the lines of "Wi-Fi security is broken, but it's not so bad because it's Wi-Fi"
Interesting book that can really burst your bubble on how bad things are and yet we are still here.
I'm not sure how that compares to the fact that WPA2 is completely insecure and trivial to decrypt on Android as "no, that is bad". Except maybe in a "well who trusts Wi-Fi security anyway?" to which I'd reply: "Actually, a lot of people. Including people on this thread".
I actually buy the argument that the RSA issue that affects YubiKey, which was announced today, is perhaps more important, since it's harder to mitigate than by using a VPN, but I don't know how bringing up Silence on the Wire makes this any less important.
Again, I haven't read the book fully or in detail, so I could be wrong about that, I guess.
This is probably the biggest misconception.
Many, many websites and APIs don't have HSTS enabled to force all connections to use TLS.
The author demonstrates using sslstrip to downgrade the connection of match.com to steal credentials.
How many people watch the green "secure" indicator in the URL bar to ensure it doesn't change mid-session?
How many thousands of apps don't even have such an indicator to observe?
How many millions of phones and APs will never get patched?
This is a severe vulnerability.
True. Yet another reason for us to push for it.
I have a chrome extension that sets the background-color of all form fields to red if the site it was served on or the ACTION attr are not https.
That said, pretty much every website in my day except for casual reading is pinned to TLS. APIs are the notable exception you pointed out, but otherwise HSTS is quite widely used, and especially effective with preload lists.
> How many thousands of apps don't have this indicator to observe?
Sure, there will be some, but your standard Java Apache client (along with 99% of the libraries used in apps) doesn't have this kind of downgrade behaviour. If they expect validated HTTPS, they will fail without it.
> This is a severe vulnerability.
Yepp :D Not the end of the world. I think the main fallacy here is the implicit assumption that the link layer is secure. That has never really been the case and a broken wifi model is merely one more testament to this fact.
I honestly never considered that one...
No it doesn't. Watch the video. It creates a clone of your network and tricks the victim's software stack to connect into it.
Websites can automatically redirect to HTTPS if the client connects on http, but many websites don't redirect
You could create a separate "secure" profile and feel safe that all traffic is secured, while still being able to browse HTTP in another profile, for instance.
Credit to https://twitter.com/marcan42
if (win7 && smb && wpa2) then vulnerable to password theft
> As Hudson notes, the attacker would have to be on the same base station as the victim, which restricts any attack's impact somewhat.
If I understand it correctly then there has to be a connection already present for the attack to work?
So even if you patch all the devices in your house/company/whatever, you can't be safe... Then your aunt's un-patched Android connects to your wifi and puts your whole network at risk. Or maybe it's that not-so-old security camera or SmartTV that will never be patched.
Time to move all those guys to an isolated vlan...
It might also be a good time to invest in NIC dongle manufacturers considering how many systems only ship with wifi.
Am I missing something?
However, homes, coffee shops, airports, etc -- those are places where someone could execute this attack successfully.
You're right that any transit router could intercept your connection and do these things but in practice those routers are "reasonably" well secured.
The AP will begin broadcasting deauth frames against the rogue AP as long as it sees it. There are probably some edge cases, but I would expect either the rogue and real APs will fight over the clients indefinitely (thwarting information leaks) or the attack code will trigger an exception and crash because what programmer would expect the client to immediately disconnect?
Set up some RPis with WiFi dongles and you can likely make a perimeter defense. IANAL and I do not know how the FCC would view this, although I imagine it is no worse than the Ruckus APs that can do this, yet still passed their certification.
This is not a substitute for patching devices, but it might help somewhat with devices that will never be patched.
This will fool your client (e.g. your phone) into connecting to it before it reaches your access point, then forward packets to your AP (a basic man-in-the-middle).
On the other hand, I am wondering if it manages to correctly forward packets back to the AP if that has MAC filtering on...
Why did OpenBSD silently release a patch before the embargo?
OpenBSD was notified of the vulnerability on 15 July 2017, before CERT/CC was involved in the coordination. Quite quickly, Theo de Raadt replied and critiqued the tentative disclosure deadline: "In the open source world, if a person writes a diff and has to sit on it for a month, that is very discouraging". Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability. In hindsight this was a bad decision, since others might rediscover the vulnerability by inspecting their silent patch. To avoid this problem in the future, OpenBSD will now receive vulnerability notifications closer to the end of an embargo.
It's precisely the correct word. Prisoner's dilemmas are simple, mathematically. This was one. OpenBSD defected. The joke's on the security researcher, though, since this doesn't appear to have been their first time.
Robert Axelrod outlined, in his 1984 classic The Evolution of Cooperation, four requirements for a successful iterated prisoner's dilemma strategy. One is retaliation. Security researchers are letting OpenBSD play an iterated game as if it's N=1, i.e. they're not retaliating. Given the community is playing "always cooperate," OpenBSD's best move is actually "always defect".
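Axelrod's point about retaliation is easy to see in a toy simulation. The payoff values below are the standard textbook ones for illustration, not anything specific from his book:

```python
# Payoff to the row player: T=5 (defect vs cooperator) > R=3 > P=1 > S=0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strat_a, strat_b, rounds=100):
    """Total score for strat_a over an iterated game; each strategy
    sees only the opponent's history of moves."""
    hist_a, hist_b, score = [], [], 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score += PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
    return score

def always_cooperate(opp):
    return "C"

def always_defect(opp):
    return "D"

def tit_for_tat(opp):
    # Cooperate first, then mirror the opponent's last move (retaliation).
    return opp[-1] if opp else "C"
```

Against an "always cooperate" community, the defector scores 500 over 100 rounds versus tit-for-tat's 300, so defection dominates. Against a retaliating tit-for-tat community, the defector collapses to 104, which is exactly the comment's point: without retaliation, "always defect" is the rational play.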
https://lwn.net/Articles/726585/ thank you 0x0
https://lwn.net/Articles/726580/ thank you 0x0
Real life is messier than any model.
I'm mostly here just to correct misstatements of facts. You're welcome to your own interpretation, game theory optimization, etc.
You also can't guarantee me that no one who gets this information early is working for a bad actor.
I was informed on July 15.
The first embargo period was already quite long, until end of August. Then CERT got involved, and the embargo was extended until today.
You can connect the dots.
I doubt that I knew something the NSA/CIA weren't aware of.
There are only a few courses of action. One is to sit quietly and let everyone eventually ship the fix. And that doesn't work: no fire under people's asses, and the work gets delayed.
The other is to release it promptly. Then at least we can decide to triage, by turning off X service (even if that means wifi), requiring another factor like a tunnel login, or what have you.
But truthfully, defecting in the Prisoner's Game played out here was the best choice. The rest of the community is playing "cooperate".
If you don’t limit mitigation to "a config setting" (and why would you?!), a patch/new version is the best mitigation you can get.
I've been doing that for years and recommend others do so as well.
The rise of HTTPS nearly everywhere helps mitigate things a bit. The same type of exploit five years ago would have wreaked havoc at the local Starbucks WiFi.
Think about it in a different way:
What if a vulnerability was discovered in TLS and FOSS implementations patched it, but there is an embargo for supposedly protecting some banking software? What if NSA/CIA/other agencies find out about it (they would know immediately) and use it to target users/activists?
But pretending as though co-ordination of any kind is somehow bad (and then resorting to emotional arguments and so on) is pretty reckless.
> even users of non-proprietary projects
Actually, many FOSS projects only get notified on the disclosure date.
Hiding the vulnerability for such a long time does more harm than good. The vulnerability can potentially be exploited by security agencies, which may well already know about it, and could also be leaked to a bad actor by an employee of one of the vendors.
Hopefully WPA2 isn't that important, but potentially security-sensitive users trusted something that was known by some to be vulnerable for three months! Bad actors could have used it against them.
The embargo meant that potential bad actors could know about the issue while vulnerable users did not.
Do you seriously expect the other billions of people on the planet to be that great too?
No. I also don't expect them to choose devices based on security. That is very bad, as vendors won't care about patching their older devices (look at Android devices, home routers...) and won't care about patching their flagship devices fast, since they can request very long embargoes.
Making compromises for those vendors and giving more time for security agencies and other bad actors to silently exploit the vulnerabilities (where FOSS projects would have made patches for users that care) is not the way to go. That philosophy actually makes everybody less safe.
If you don't agree with an embargo and decide to break it, that's on you. But the consequence is that you shouldn't be surprised if next time you're informed later, or not at all. What OpenBSD proponents and developers are doing right now, is damage control. It may work this time, it may work next time, but it won't keep working every time so pick your fights right. It isn't the first debacle OpenBSD has with full disclosure either (hint: OpenSSH).
There are also millions upon millions of devices which won't get patched. Given the vulnerability apparently hits Linux, and hence Android, hardest, do you think all the smartphones running Android 4.3, 4.4, 5.0, and 6.0 will be patched?
0 - https://github.com/openbsd/src/commit/2e40dd69ac29d6a858309b...
1 - https://github.com/openbsd/src/commit/cc66e8f557d6f3d4dea5ea...
The patch obviously has an explicit description:
"State transition errors could cause reinstallation of old WPA keys."
It's true, however, that anybody who analyzes the diffs would eventually figure that out, as Theo de Raadt argued.
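For context on why "reinstallation of old WPA keys" is serious enough to reverse-engineer from a diff: reinstalling a key resets the packet nonce, and with a stream-cipher mode that means keystream reuse. A toy demonstration in Python, using a hash-based stand-in for the real CCMP keystream (this is an illustration of the principle, not WPA2's actual cipher):

```python
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    # Toy keystream: hash(key || nonce || counter), stretched to length.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"sixteen byte key"
p1 = b"attack at dawn!!"
p2 = b"defend at dusk!!"

# Key reinstallation resets the nonce, so two packets share nonce 0:
c1 = xor(p1, keystream(key, 0, len(p1)))
c2 = xor(p2, keystream(key, 0, len(p2)))

# The shared keystream cancels out: c1 XOR c2 == p1 XOR p2,
# leaking plaintext structure without ever recovering the key.
assert xor(c1, c2) == xor(p1, p2)
```

That one-line commit message plus this well-known property is arguably enough for a motivated attacker to reconstruct the attack, which is the rediscovery risk Vanhoef describes.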
My conclusion is also that the real error was even wanting to give the details to him at that moment, as there's apparently a history of him not respecting embargoes.
The whole thing is a shit show and really I'm rather more behind OpenBSD's approach.
Edit, just to expand on this, since someone deleted a post:
It's slightly more complicated than the prisoner's dilemma. The prisoner's dilemma doesn't account for a large facet of the problem being discussed here. If all the good parties participate and coordinate, then we're better off. The problem is there are outlying circumstances which mean that not everyone will be included:
1. If someone kicks someone out (OpenBSD) on political whim playing CYA, they no longer benefit.
2. If a party is not let in, they no longer benefit.
3. If someone is unaware of it, they don't benefit.
This turns it into a security monopoly where the big vendors get exclusive rights to embargo and exclude smaller vendors and control the disclosure process on their own schedule.
The first thing the people outside of the club find is they wake up on Monday morning and have to clear up a shitstorm of monumental proportions with fewer resources than the monopolised vendors, who've had time to deal with it.
Then there's the assumption that the monopolised vendors are trustworthy which is 100% impossible to validate and therefore invalid.
No bullshit please - you guys do a wonderful job of avoiding it and stamping on it when it does turn up. Keep up the good work :)
> This turns it into a security monopoly where the big vendors get exclusive rights to embargo and exclude smaller vendors and control the disclosure process on their own schedule.
Not necessarily. It turns into a monopoly of those who can show themselves to be credible partners. This exhibits incumbency bias which in social context we call track record. It's not nearly as exclusionary as you're making it out to be.
> Then there's the assumption that the monopolised vendors are trustworthy which is 100% impossible to validate and therefore invalid
This is common in trust problems. You don't need to be 100% sure everyone you're dealing with is trustworthy to work with them because we don't live in a single-iteration game. Again, iterations of retaliation and forgiveness remove the need to have 100% certainty about a player's intentions.
No one is credible here. The very nature of a closed agreement of secrecy between arbitrary parties is the opposite of credibility.
Sure, but eventually you get called out on it in a public forum, like this one, and people stop giving you goodies going forward. I would consider it acceptable practice to, when considering dealing with OpenBSD (or people who are close to them), (a) withhold vulnerabilities until after the embargo date or (b) refuse to give any information unless they sign a binding non-disclosure agreement committing them to the deadline under pain of penalty. (The latter is an option because it appears, in this case, they broke the spirit if not letter of the agreement. The solution to that problem is legalese.)
I didn't break any agreement. I agreed with Mathy on what to do, and that's what I did.
The fact that Mathy decided to get CERT involved and subsequently had to extend the embargo has nothing to do with me.
If Mathy was concerned, why did he wait to notify CERT? Should that not have been the first priority?
Which doesn't make a difference if OpenBSD still gets their patch out at the same time as everyone else. Unlike other vendors, it doesn't take OpenBSD four months to go from vulnerability notification to patch release; if you look at previous disclosure timelines, they typically have a patch out in days.
TL;DR: OpenBSD acted rationally if they'd prefer to go it alone, which seems to be their culture. To their credit, it's worked pretty well so far. But you can't have your cake and eat it too. If they prefer a mad scramble after public disclosure, they'll get it. But they shouldn't get early notice from responsible researchers.
I don't believe that the embargo is healthy or responsible! If anything, it's a monopolising factor.
This really isn't true (because that kind of information is protected by TLS) and the article is highly disingenuous to not say so.
Nobody has trusted WiFi encryption as protection for sensitive information for more than a decade.
As much as this is a scare tactic to get people to demand vendor patches, HTTPS has been mitigating this class of attack for a while.
Browsers don't have any trick (that I know of) to enforce HTTPS on first connection. HSTS can be defeated by simply rejecting connections to HTTPS: the user will retry the site from different devices, and may even clear their HSTS cache in order to reach the site. Assuming the site used HSTS at all.
I'm not certain if uninstalling a browser clears the cache (do uninstalled browsers retain their profiles?), but preloaded sites would not be affected - they're included in the browser binary. Either way, let's not act like there's a massive hole in HSTS because there's a possibility that users might go as far as reinstalling their browser to visit a not-preloaded HSTS-enabled site that's being targeted.
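For reference, HSTS (including the preload flag being discussed) is just a response header the browser caches. A minimal parser sketch, with a typical example header value:

```python
def parse_hsts(header: str) -> dict:
    """Split a Strict-Transport-Security header value into its directives.
    Directive names are case-insensitive, so they're lowered here."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            name, _, value = part.partition("=")
            directives[name.strip().lower()] = value.strip()
        else:
            # Valueless directives like includeSubDomains or preload.
            directives[part.lower()] = True
    return directives

hsts = parse_hsts("max-age=31536000; includeSubDomains; preload")
```

The browser remembers the policy for `max-age` seconds; `preload` signals consent to be baked into browser binaries, which is what protects even a freshly installed browser on its first connection.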
"You wouldn't HSTS the whole internet, would you?"
Google: "Hold my beer..."
Right now it's mostly unimportant new domains, but it's a start, and they could convince other domain registrars to follow suit.