+1 for a good read, really enjoyed his writing style.
For those unfamiliar, Matthew Green is a cryptography researcher and professor at Johns Hopkins.
Edit: TIL it's "Johns" Hopkins, ty /u/dEnigma
While I'm sure this can't take much of the blame, it sure strikes a chord. The IEEE standards process seems insanely archaic and broken in the open-source era.
This really does highlight the absolute disaster zone that the Android handset market has become as far as updates are concerned. I'm sure the Pixels will get a fix relatively quickly but almost every other Android user is going to be left in security limbo.
In general most bigger manufacturers have been somewhat decent in updating their flagship devices. With a Sony flagship from the last 18 months for example, you usually won't run more than two months behind on security updates. Samsung is similar if I remember correctly. Hopefully a big exploit like this will be enough of a kick in the butt to get manufacturers releasing security updates faster.
I totally agree with your hope that this will kick both the manufacturers and Google in the butt enough to get something done about this. I don't like our chances though!
There is a problem with handset abandonment, but it exists across all vendors, and it does not support sequence7's claim that this is solely an Android problem.
For example, under the Consumer Rights Act 2015 in the UK the product must last for as long as a reasonable person would expect it to, and Apple's interpretation of that is five/six years (see https://www.apple.com/uk/legal/statutory-warranty/).
The Consumer Rights Act 2015 also covers unfair terms in sales contracts (and at least in Scotland EULAs are part of the sales contract, per Beta Computers (Europe) Ltd v Adobe Systems (Europe) Ltd; I don't know the status elsewhere in the UK), and it's quite likely you could just go through most of the possible contractual outs and argue they are unfair terms.
And they are more expensive than many computers you can buy on the market. Two years of support on a device that can cost $500+ isn't acceptable.
We are 4-5 years into the period where people have had sub $300 choices, so there is an alternative to spending $500+ on a device that comes with 2 years of support. Maybe not a fantastic alternative, but the $300-$500 extra that people choose to spend says something about what they care about.
You don't know that. Apple could very well release an iOS 10 update for this.
But in both cases the stated policy is that the devices will not receive any more updates, either feature or security.
Hopefully Android Oreo (8.0) with Project Treble will stop this ridiculous trend for the rest of us. Together with the smartphone market being saturated (budget phones of 200 EUR are very decent these days), we may end up with long-term support on older yet still decent devices.
It's just a word that means you paid more for having all the bells and whistles that the OEM could offer at the time, instead of going for the next-best model or such.
Fortunately, whether you can afford the best of the best with all the optional extras or a cheaper second-tier model doesn't affect the security of the device. And it shouldn't, because if you can't afford a "flagship" device, doesn't mean you can afford to get hacked either.
Unfortunately, while the security-update frequency ought to be comparable, it turns out that it's mainly comparably bad :-/
That's still truly terrible compared to Apple's legacy device support. iOS 11 and future patches still support even the iPhone 5s, a phone from 2013.
Although websites or apps may use HTTPS as an additional layer of protection, we warn that this extra protection can (still) be bypassed in a worrying number of situations. For example, HTTPS was previously bypassed in non-browser software, in Apple's iOS and OS X, in Android apps, in Android apps again, in banking apps, and even in VPN apps.
Either way, this disclosed vulnerability only involves a link-layer man-in-the-middle in order to collect traffic. Active manipulation of traffic (required for TLS interception) is more complicated.
: "Finally, although an unpatched client can still connect to a patched AP, and vice versa, both the client and AP must be patched to defend against all attacks!"
: "you can try to mitigate attacks against routers and access points by disabling client functionality (which is for example used in repeater modes) and disabling 802.11r (fast roaming)."
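For APs you run yourself with hostapd, that mitigation roughly translates to not enabling fast-transition key management. A minimal sketch (interface name, SSID, and passphrase are placeholders; check the option names against your hostapd version's documentation):

```ini
# hostapd.conf sketch - placeholders, adapt to your setup
interface=wlan0
ssid=home-net
wpa=2
wpa_passphrase=change-me
# Plain WPA-PSK only: leaving FT-PSK/FT-EAP out of wpa_key_mgmt
# keeps 802.11r (fast roaming) disabled, per the quoted mitigation.
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```

Disabling client/repeater functionality is vendor-specific, so there's no single config knob for that part.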
Well, hopefully this means no kernel patch will be needed.
You get updates every week.
Your advice is valid, but it’s important to not have a false sense of security.
I found it interesting that, in his article, he said:
"With our novel attack technique, it is now trivial to exploit implementations that only accept encrypted retransmissions of message 3 of the 4-way handshake. In particular this means that attacking macOS and OpenBSD is significantly easier than discussed in the paper"
but elsewhere it said recent versions of OS X and iOS are not impacted. I wonder if the "safe" OSes are only vulnerable to the blocking/replay but not the decryption of data?
My UniFi AP-PROs show up today so I'll make sure to update them first thing.
Also, I'm having a bit of a hard time understanding the attack. It sounds like he forces them to connect to his AP, performs the attack, then allows them to connect to the intended network with the zeroed key, THEN is able to sniff that client's traffic because he knows their key? If I understand correctly, this means he cannot sniff the whole network's traffic, only the traffic between the attacked client and the AP? This makes me wonder about the meaning of a pre-shared key, but I'm guessing the PSK is only used to set up the relationship between client and AP, and then after the initial connection/pairing the pre-shared key is no longer used...
He forces them to connect to his own AP and forwards all traffic to the destination so that the client is unaware it has been redirected.
He then forces the client to reinstall the key; on anything derived from wpa_supplicant (e.g. Linux, Android, etc.) the client has already blanked the key out after first use, so the key it reinstalls is now all zero bytes.
He can continue to forward the traffic to the destination so that the client gets responses, but now he can decrypt all of the traffic too.
For clients that reinstall the correct key (which the attack does not recover in any way), the attacker instead has to snoop enough encrypted data: the key reinstallation also resets the frame counters, which leads to nonce reuse, a serious problem in ciphers like AES-GCM.
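To see why a reset frame counter is so damaging, here's a toy, stdlib-only sketch. Nothing below is real WPA2 crypto: the hash-based keystream and the sample frames are invented purely to show that a reused nonce means a reused keystream, and a reused keystream lets an eavesdropper cancel the encryption out.

```python
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    """Toy keystream generator, a stand-in for a real per-packet
    cipher like AES-CCMP. Same key + same nonce -> same keystream."""
    out = b""
    counter = 0
    while len(out) < length:
        block = key + nonce.to_bytes(8, "big") + counter.to_bytes(4, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: int, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"session-key"

# Two frames encrypted under the SAME nonce, which is exactly what a
# key reinstallation causes by resetting the frame counter:
c1 = encrypt(key, nonce=1, plaintext=b"GET /login HTTP/1.1.....")
c2 = encrypt(key, nonce=1, plaintext=b"password=hunter2........")

# XORing the two ciphertexts cancels the keystream, leaving p1 XOR p2.
# Knowing (or guessing) one plaintext then reveals the other:
p1_known = b"GET /login HTTP/1.1....."
recovered = bytes(a ^ b ^ c for a, b, c in zip(c1, c2, p1_known))
print(recovered)  # -> b'password=hunter2........'
```

This is why "only" resetting the nonce, without ever learning the key, is already enough to decrypt traffic.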
If you choose not to use public WiFi because you can't "trust" it, then you now need to stop using your private WiFi too (until your systems get appropriate patches).
Using a VPN is the best way to mitigate this until your device is patched, assuming you trust your VPN provider or run your own VPN.
Edit: Actually, even if you don't trust your VPN provider, you'll be protected against this attack (KRACK), given their client is implemented properly.
Unfortunately this is a big part of trusting your VPN provider. It’s shocking how bad the situation is, especially it seems on those marketed via Android apps. 
I'm sure, given the size of that list, that they tested some of the biggest players in the VPN space. I think it'd be good to know which apps were tested and didn't show any issues, especially in light of KRACK and the Android bug in wpa_supplicant.
With critical bugs like these, it's certain Google will require recent devices that have enough affected users to be updated ASAP. Expect an update within a few weeks.
Also you could install a better version of Android on your phone rather than an outdated vendor version. That will probably fix more security related issues than just this one :)
But it's very difficult to ensure that all the communications your device is making (background services, vendor apps...) go through that channel.
Reality is that DNS remains and will continue to remain a giant hole in TLS.
Relying on the efforts of unpaid volunteers doing their best to hack together binary blobs is also not the best idea...
Not all devices are supported by major ROM distributors, nor is the support guaranteed to be endless or current... (even some devices as major as the Galaxy S6 for example)
1. Beyond difficulty of porting blobs, you might well also simply get your updates from a custom ROM faster than you'll get them from the manufacturer, even if it's still supported. That in itself is an advantage.
2. Backporting updates to third party components can be simple (assuming a stable ABI/API); the easiest case is probably that of just dropping binaries from a similar phone that did get updated into a zip file and then flashing it. Look at busybox installers, for example; all you need is a version compiled for your hardware. Java components can sometimes be changed as well (see xposed). This works on desktop systems as well, sometimes: I've been able to 'fix' older games into working just by dropping a newer version of a dll into the game directory (directx, openal, etc))
3. Maybe the company is just stupid. Motorola (or is it Verizon?) has tried Marshmallow for the Moto E2 in Europe, but not in the US. I'd expand on this but I'm on mobile and I'm lazy.
In practice, everything of value should be going over TLS. If you're worried you should be using a VPN on untrusted networks. This attack, if I'm reading it right, doesn't do anything someone on your wlan or lan can't do right now via ARP poisoning and other attacks. So being on that work connection or restaurant wifi is almost the same risk level of this attack.
In the case that it is, it's curious how you would inject data into an SMB stream without failing checksums from client-side checking. Maybe it's trivial to deal with, not sure.
If the WPA2 protected wifi network is using AES, which is the most common in my experience, then they won't be able to inject any data. From the Krack website:
If the victim uses either the WPA-TKIP or GCMP encryption protocol, instead of AES-CCMP, the impact is especially catastrophic. Against these encryption protocols, nonce reuse enables an adversary to not only decrypt, but also to forge and inject packets.
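The injection half can be illustrated with the same kind of toy model. With the keystream known or reused, and no working integrity check, a stream cipher is malleable: flipping bits in the ciphertext flips the corresponding bits in the plaintext. The keystream and messages below are invented for illustration; real GCMP forgery additionally involves recovering the authentication key, which this sketch glosses over.

```python
def xor_keystream(data: bytes, ks: bytes) -> bytes:
    """XOR data against a keystream (encryption and decryption
    are the same operation in a stream cipher)."""
    return bytes(d ^ k for d, k in zip(data, ks))

ks = bytes(range(16))              # stand-in keystream, assumed known/reused
plaintext = b"PAY BOB $0001.00"
ciphertext = xor_keystream(plaintext, ks)

# Without authentication, an attacker who knows the plaintext structure
# can splice in arbitrary bytes: C' = C XOR old XOR new.
old, new = b"$0001.00", b"$9999.00"
forged = ciphertext[:8] + bytes(
    c ^ o ^ n for c, o, n in zip(ciphertext[8:], old, new)
)
print(xor_keystream(forged, ks))   # -> b'PAY BOB $9999.00'
```

That malleability is why the quoted passage calls the TKIP/GCMP case "especially catastrophic": once forgery is possible, the attacker is no longer limited to passive decryption.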
If this is an actual in the wild exploitable issue, there will be patches very quickly for handsets in the support period, as quickly as there is for iOS. This has been the case repeatedly before as well.
What a weird post in general. Maybe wait to complain about this a month down the line or so? Instead it's just effectively noisy rhetoric.
The use of the word hopelessly was probably unnecessarily dramatic, I agree, but I'll leave it there so your comment makes sense.
Remember, the 'S' in IoT stands for Security.
One of the best quotes I've heard in a while.
From the source: In general though, you can try to mitigate attacks against routers and access points by disabling client functionality (which is for example used in repeater modes) and disabling 802.11r (fast roaming).
For ordinary home users, your priority should be updating clients such as laptops and smartphones.
Are manufacturers like Linksys and D-Link issuing patches now, or will it be enough to have Windows/OS X/iOS/Android updates enabled? Or do I need both?
Can anyone explain the timeline for releasing such significant security findings? Why is it disclosed to the public half a year after being submitted for review? I'd guess the (publicly funded) research behind it is a lot older than that.
e.g. Dan Kaminsky's discovery of DNS cache poisoning had a 5 month responsible disclosure embargo.
From my understanding of research at public institutions there is a long period of time and steps between finding something interesting and submitting a paper for review.
Why not disclose the vulnerability first to concerned parties and then write up a fancy research paper? Why the other way round?
Only two explanations I could come up with: either there was a very short time frame between identifying the vulnerability and writing the paper, or further research was needed. Or... I don't know.
It's 3 months. That's a reasonable delay if you have to alert a lot of manufacturers and they need time to roll out critical patches to lots of devices.
Please stop confusing slowness with an intent to delay or ignore.
I'm not convinced it was a bad decision. Why would you want to leave your users vulnerable? It's possible that this has been exploited in the wild.
Up until now, there were no indications that this was being exploited publicly. After a flaw like this gets known (whether through a coordinated disclosure or through OpenBSD's early patch) you can be assured people will be exploiting this.
Do you both stay silent and take the minor risk of your users being vulnerable for a short time longer whilst patching and disclosure is coordinated with all parties (-1, -1), or do you "betray party B" but get your own users secured as soon as possible (-3, 0)?
I think coordination makes more sense in a flaw as big as this.
> To avoid this problem in the future, OpenBSD will now receive vulnerability notifications closer to the end of an embargo.
i.e., _explicitly signalling_ that this researcher intends to play "defect" with OpenBSD in future rounds, should future rounds occur.
In this case, it was not a short time.
Is this attack likely to generate log evidence on affected APs in their default configuration, or is it so far down the stack that no evidence is generated and nobody could refute this claim?
Anyone got any suggestions for options?
You could just ... buy an iPhone and get timely security updates for years.
EDIT: Downvote if you want, but if iOS 11 contains this security fix exclusively and not iOS 10, then an iPhone 5s bought on 20 September 2013 is going to get this fix. If Apple release an iOS 10 update and you bought an iPhone 5 on 21 September 2012 you're covered too.
Apps should be self-contained in their bundles, and may
not read or write data outside the designated container
area, nor may they download, install, or execute code,
including other apps.
And section 2.5.6:
Apps that browse the web must use the appropriate WebKit
As far as I can tell, you can install a web rendering engine that is not the built-in WebKit, as long as you only use it for HTML/JS that come with your app. At that point the JIT caveat applies.
But regardless, with your own device, you can run whatever code you want on it.
> (if it is not against the app store guidelines)
But it has very few kernel security patches: https://cve.lineageos.org/android_kernel_motorola_msm8610
I'm using it successfully with LineageOS 14.1 (Android 7.1.2).
It attempts to detect root or modifications to the ROM by malicious software.
Certain newer devices have secure boot attestation that may cause SafetyNet to fail unless spoofed to be a different device which does not have such attestation.
Also, neither root nor an unlocked bootloader is required to make "proper backups". Some data actually can't be backed up, and for some data there is no point in making a backup. If the goal is to be able to restore the system to a specific, known state, a bit-for-bit image backup of the entire filesystem is just one way to accomplish the task.
So in other words "yes, that is a requirement that will eventually be on all android phones"? Am I misunderstanding something? Older phones being an exception does me little good going forward.
> Also, neither root nor an unlocked bootloader is required to make "proper backups". Some data actually can't be backed up, and for some data there is no point in making a backup. If the goal is to be able to restore the system to a specific, known state, a bit-for-bit image backup of the entire filesystem is just one way to accomplish the task.
The last time I tried adb backup and restore, it was a mess. Multiple apps like Skype had no data. And authenticator explicitly opts out of being backed up.
Titanium backup, on the other hand, works perfectly.
Ideally I would just have a rooted phone, but then safetynet complains, and I can't even use Netflix and pokemon. As an alternative I could accept an unrooted but unlocked phone, and root it only when making and/or restoring backups. But having neither is a big hassle.
To date, though, it's still very possible to bypass any protections built on this. I believe this may even be possible without spoofing the device in this way, but in any case, Magisk works on any device available today.
It's really not the same as being free of annoying and unhelpful restrictions.
I'm actually persuaded that I don't need terminal root access on a device (except for system debugging), but rather a firmware signed with my own release keys, and apps that need privileged access baked in.
Also fairly actively developed and supports a wide range of devices.