Why Android SSL was downgraded from AES256-SHA to RC4-MD5 in late 2010 (op-co.de)
365 points by ge0rg on Oct 14, 2013 | 86 comments



There's interesting technical content here, but it suffers from its alarmist tone.

The MD5 hash function is broken, that is true. However, TLS doesn't use MD5 in its raw form; it uses variants of HMAC-MD5, which applies the hash function twice, keyed with two different padding constants that have a high Hamming distance from each other (put differently, it tries to synthesize two distinct hash functions, MD5-IPAD and MD5-OPAD, and apply them both). Nobody would recommend HMAC-MD5 for use in a new system, but it has not been broken.
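
To make the construction concrete, here's a minimal Python sketch of the ipad/opad structure (in real code you'd just call the standard hmac module; this only exists to show the two MD5 applications explicitly):

  import hashlib
  import hmac

  def hmac_md5(key: bytes, message: bytes) -> bytes:
      block_size = 64                            # MD5's block size in bytes
      if len(key) > block_size:
          key = hashlib.md5(key).digest()        # overly long keys are hashed first
      key = key.ljust(block_size, b"\x00")       # then padded out to the block size

      ipad = bytes(b ^ 0x36 for b in key)        # "inner" padding constant
      opad = bytes(b ^ 0x5c for b in key)        # "outer" padding constant

      inner = hashlib.md5(ipad + message).digest()   # first MD5 application
      return hashlib.md5(opad + inner).digest()      # second MD5 application

  # Sanity check against the standard library implementation:
  assert hmac_md5(b"key", b"msg") == hmac.new(b"key", b"msg", hashlib.md5).digest()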

RC4 is horribly broken, and is horribly broken in ways that are meaningful to TLS. But the magnitude of RC4's brokenness wasn't appreciated until last year, and up until then, RC4 was a common recommendation for resolving both the SSL3/TLS1.0 BEAST attack and the TLS "Lucky 13" M-t-E attack. That's because RC4 is the only widely-supported stream cipher in TLS. Moreover, RC4 was considered the most computationally efficient way to get TLS deployed, which 5-6 years ago might have been make-or-break for some TLS deployments.

You should worry about RC4 in TLS --- but not that much: the attack is noisy and extremely time-consuming. You should not be alarmed by MD5 in TLS, although getting rid of it is one of many good reasons to drive adoption of TLS 1.2.


There is another technical angle: RC4 is usually a lot less CPU-intensive than the available alternatives. Not using RC4 can easily mean stuttering video playback, greatly diminished battery life, and even lock-ups. Very few users are open to accepting that issues like these are "better" for them.

Many RC4 deprecation efforts have been rolled back in the face of issues like this, especially on hard-to-fix embedded devices (think TVs, cars, and phones) with comparatively weak CPUs.


There are two solutions: use hardware with the AES-NI instruction set, which makes AES blazing fast, or use a better stream cipher like Salsa20. On my machine, which has an Intel i5-3570K, Salsa20 is about 25% faster (edit: than RC4).

Unfortunately, neither solution is easy: only the very latest chips have AES-NI instructions, and not many clients support Salsa20 yet (OpenSSL does not, for example, and it powers a lot of SSL stuff).


Does any TLS stack support Salsa20? I know Adam Langley has a draft for ChaCha+Poly1305, but that's not Salsa20.

Either way, I don't think Salsa20 is a realistic suggestion for improving TLS performance.


Yes. GnuTLS does: http://www.gnutls.org/manual/gnutls.html#Encryption-algorith...

Anyway, TLS is in a tough spot. It's such a widely adopted standard, with so many implementations, that making radical changes is exceptionally difficult. AES-NI leaves the standard mostly alone but requires new(ish) hardware; implementing newer, faster primitives (like Salsa20), on the other hand, requires turning the massive boat that is TLS.

There are no easy solutions, at least as far as I can see.


That's true of all new ciphersuite proposals, isn't it? The Salsa20+Poly1305 proposal just replaces AES, CTR, and GHASH with Salsa20 and Poly1305.

The problem is getting the installed base up to TLS 1.2.


Oh, yes - definitely. I just picked Salsa20 as an example because I already had benchmark data for my machine, and I am familiar with it.

But even TLS 1.2 won't help, because 1.2 doesn't include ciphers that are screaming-fast without hardware acceleration. AES-GCM is faster than AES-128-CBC/HMAC-SHA1, but Salsa20-256/HMAC-SHA1 is still twice as fast on my machine. If the AES-NI instruction set is available, though, AES-GCM handily beats everything by a large margin. (Of course, with hardware acceleration, AES-128-CBC/HMAC-SHA1 is marginally faster than Salsa20-256/HMAC-SHA1, again on my machine.)

The ultimate point is that, without the AES-NI instruction set, new ciphers are just about the only way to get really good TLS performance.


Does AES-GCM with AES-NI and PCLMULQDQ beat Salsa20+Poly1305 with lots of sessions? I know it's got excellent cycles/byte for a single session, but TLS implementations also need agility.


I'm afraid you've exhausted the limits of my precomputed benchmarks. :)

I don't know the answer offhand, but I would suspect that hardware-accelerated AES-GCM would win. It certainly does in single-threaded, "one-session"-esque tests, and the margin of its victory makes me think that hardware-accelerated GCM would be hard to beat by anything.

On my machine, a single thread/core running nothing but AES-GCM can encrypt/decrypt 8192-byte blocks of data at 1.32 GiB/s (this is using OpenSSL's benchmarking feature). Yes, that's gigabytes, not gigabits. It's literally faster than I/O on my SSD. (Salsa20, without a MAC, can do the same at about 0.64 GiB/s.)

When I told OpenSSL to use four threads in parallel, it clocked in at 5.01 GiB/s, which is absolutely crazy.
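
(If anyone wants to reproduce numbers like these, OpenSSL's built-in benchmark can be invoked along these lines - exact figures will obviously depend on the CPU, the OpenSSL build, and whether AES-NI is picked up:)

  $ openssl speed -evp aes-128-gcm            # single process
  $ openssl speed -evp aes-128-gcm -multi 4   # four processes in parallel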

That said, beyond a general leaning towards AES-GCM (simply because it is so fast with hardware acceleration), I don't have any hard data on which would be the victor. But I may just construct some benchmarks to test that out, because it's an interesting question.


(Disclaimer: I'm one of the authors of the Salsa20-in-TLS draft.)

Note that the suggestion of using Salsa20 is to replace RC4 not only to get better performance, but because RC4 is broken (as you know).

Salsa20 (and ChaCha) can be implemented on constrained devices and reach RC4-like performance. On modern architectures, their word-based operations make better use of the hardware than RC4 does and can reach better performance.

Yes, AES with hardware support such as AES-NI can provide really good performance too. But then we _only_ have AES (and DES/3DES). Do we want to reduce SSL/TLS to a single symmetric encryption primitive? And no stream ciphers?


There is at least an RFC out for a TLS stream cipher using Salsa20, if I'm remembering current events correctly. Of course publication of a spec will precede implementation in hardware and software by some years, I would imagine.



Google's Adam Langley has just proposed ChaCha20 with Poly1305 for TLS, and says the performance is pretty good (~5x faster than AES-GCM in software).

https://www.imperialviolet.org/2013/10/07/chacha20.html


Actually, RC4 is pretty memory-hungry (with a state of 256 bytes) and performs a lot of read operations that hit main memory on small devices.


Thank you for the insight. It is good to get some more details on how broken it is. I will add a clarification regarding MD5 to the article.

Sorry about my alarmist tone - from time to time I need to get rid of my conspiracy theories.


This is a truly impressive piece of investigative work. Excellent and very useful. Regarding the tone, I would be alarmed too.


On the contrary, "It has not been broken" is exactly what I would expect a programmer to say.

If the security of an algorithm is weakened, then it's important to evaluate the use of the algorithm and make efforts to implement stronger security now. You should feel fortunate that you even get the time to move to something better before all hell breaks loose.

This is the same kind of thinking I hear daily when people say things like, "Just use bcrypt" without thinking about the consequences.

The tendency for programmers to think of security in a nihilistic way continues to boggle my mind. I don't think the article suffers from an alarmist tone. I think it's correct to look at something shitty and call it shit.


I have no idea what this comment is even trying to say. I have no idea what MD5 has to do with bcrypt, and I have no idea what "nihilism" has to do with the fact that HMAC-MD5 isn't broken. We didn't just "discover" that MD5 was weak; Paul Kocher knew it was weak when SSL 3.0 was standardized back in 1996, which is why the SSL 3.0 handshake PRF uses both SHA-1 and MD5.

Yours is the kind of comment anyone can write without knowing anything whatsoever about cryptography, so I'm wary of going into more detail.


Apologies. Perhaps I'm being a master of the obvious here, so I'll restate more simply:

When people try to implement security without actually thinking about what the system is doing, it creates weaknesses in the security, not due to algorithmic weaknesses, but because the organization and the engineering discipline for the future is compromised. Thus, while "just use bcrypt" or "just use HMAC-MD5" might work today, the organization doesn't have the mind to update it when it finally does break.

This is exactly what happened (and is still happening) today after MD5 was broken.


This is the same comment with fewer words, and while I appreciate the concision, it doesn't make any more sense to me.

Bcrypt isn't broken or even weakened.

HMAC-MD5 isn't broken.

HMAC-MD5 and bcrypt are unrelated.

Nobody is ignoring the problem of MD5; in fact, suspicion about MD5 animates the very first secure SSL specification we have, from almost 20 years ago. Nobody is saying "just use HMAC-MD5".


I think what he is saying is that many individuals and organizations will not learn the fundamentals behind why X is broken; they only learn "X is broken, use Y instead."

They should instead learn that Y is also potentially broken in a given circumstance - "maybe that doesn't apply to my current situation, but I need a review process to check that it still doesn't apply to me at some point in the future."

For someone designing a cryptography application, this understanding should be very deep. I don't think it needs to be as deep for someone who is configuring their Apache server and just needs to know what ciphers to enable and which ones to prefer. In this case it is best to follow an industry best practice based on the type of data being sent over the wire and the compatibility/performance required by the clients/users. Then schedule an annual or quarterly review of those choices to make sure they don't go out of date and keep an eye on security bulletins in case one of them is severely broken.


What he's saying is that these blanket "just use X" statements are what is broken. Some time ago it was "just use MD5", and we're still suffering through the fallout of that long after MD5 has been shown to be broken. Now we're pointing everyone in another direction, and at some point that will be broken too. His point is that we need to educate people on the reasons why one algorithm is better than another for certain security concerns, rather than relying on blanket catch-all declarations.


And now I'd like to say for the third time that no, there was no "just use MD5" meme in cryptography or in software development, and if TLS is an illustration of anything, it's of not simply leaning on MD5. Once again: the TLS protocol itself is not vulnerable because of MD5, and it's not vulnerable because its designers and implementors both knew about and accounted for the weaknesses of MD5.

The author took the opposite lesson from TLS than the one that it actually demonstrates, and the commenter above is harping on that broken lesson.


As a computer scientist, it's a joy to discover when you're wrong about things. So I'm enjoying being on the wrong side of the discussion for once, because I'm learning lots.

Thank you for your replies tptacek, I've learned much from this discussion. If I could edit my top comment, I would.


:)


Has anyone said "just use MD5" to someone who wasn't about to use CRC32 instead?


I doubt it in this exact case, but I've seen MD5 being (ab)used in really weird ways, which I attributed to a mindless "oh, I'll just use MD5 here, I heard it's good for security!"

One particular case I remember was use of md5(md5(md5(unix_timestamp()))) to generate "secure" session tokens.


That scheme would be insecure even if it were SHA3(SHA2(SHA1(unix_timestamp()))).
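
To make the failure mode concrete, here's a small (hypothetical) Python contrast between a timestamp-derived token and one drawn from the OS CSPRNG - the hashing does nothing to fix the tiny, guessable input space:

  import hashlib, os, time

  # Broken: seconds-since-epoch is guessable, so an attacker can simply
  # enumerate candidate tokens around the time the session was created.
  bad_token = hashlib.md5(str(int(time.time())).encode()).hexdigest()

  # Fine: 16 bytes straight from the OS CSPRNG (os.urandom), hex-encoded.
  good_token = os.urandom(16).hex()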


> This is the same kind of thinking I hear daily when people say things like, "Just use bcrypt" without thinking about the consequences.

Sorry to say, but "just use bcrypt" is currently the right three-word answer for anybody asking "I'd like to hash a password, and I don't want to learn all of crypto before I do." Bcrypt is currently among the algorithms that are hard to break if used correctly; it's widely deployed, has wide support across languages and frameworks, and is fairly simple to use. There's little room for major fuckups here.

There are algorithms that are harder to break (scrypt) or an official standard (PBKDF2), but seriously, bcrypt is currently good enough. Sure, it's always better to read and learn, but sometimes people just have to get things done and I'd rather see them use bcrypt than sha1 or unsalted md5.
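
For anyone wondering what "fairly simple to use" means in practice, here's a minimal sketch with the Python bcrypt package (just an illustration - check your own framework's password-hashing helpers first):

  import bcrypt

  # The salt is generated for you and stored inside the resulting hash string;
  # the cost factor can be raised over time as hardware gets faster.
  hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))

  # Verification re-derives the hash using the salt/cost embedded in `hashed`.
  assert bcrypt.checkpw(b"correct horse battery staple", hashed)
  assert not bcrypt.checkpw(b"wrong password", hashed)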


> The tendency for programmers to think of security in a nihilistic way continues to boggle my mind.

tptacek appears to be too modest to say it himself, so I'll go ahead and point it out: he's not "just a programmer", he's a well-respected computer security and vulnerability researcher.

This isn't to say that you should ever simply take his word for stuff, but rather that you are on one hand preaching to the choir, and on the other that you are probably not considering practical effects on security design that he has to wrangle with all the time.

For instance, it's probably a bad idea to hop immediately from one weakened (not even broken) cryptosystem to The New Hotness just because flaws are uncovered, especially for those doing this without thinking of the consequences. For every theoretical security bug you may fix while doing the conversion, you may very well introduce two much more practical security bugs.

Cargo cults are bad wherever they are encountered, even when the cult involves something as seemingly innocuous as "Cryptosystem $FOO has been weakened, time to jump ship".


> This is the same kind of thinking I hear daily when people say things like, "Just use bcrypt" without thinking about the consequences

I'll say what everyone's thinking: What are the consequences?


The parent seems to be implying (taking other comments into account) that people "cargo cult"-ing on ideas like "just use bcrypt" might work now, but it will become a liability in future when bcrypt is weakened or broken (making it more difficult to get people to switch to the next standard practice).


That's odd, I don't think anyone is taking the advice to mean "use bcrypt for ever", I'd imagine that everyone understands that we use it because it's good enough for the foreseeable future.

An odd point for the GP to make.


Thanks for your insightful comments and for representing the paradox of the internet: an alarmist tone is required to generate interest in a story, but taking the time to explain the cognitively challenging details is instantly off-putting in an article. The average internet user is becoming so stupid that even the simplified versions are TL;DR. I've come to rely on Hacker News comments in ways I used to rely on Slashdot stories. The SNR at Reddit is overwhelming.


Minor nit: note that a collision after the first MD5 application will result in an HMAC-MD5 collision. The dissimilarity between the two MD5 applications isn't for collision resistance. (The second MD5 application is still mandatory for preventing length extension attacks, and makes key recovery attacks more difficult, among other things.)

Rather, the increased collision resistance comes from the fact that the 64-byte keyed padding puts the MD5 context in a state unknown to the attacker before any of the attacker's data touches the MD5 state. As long as the HMAC key has at least 128 bits of entropy, all possible values of the 128-bit MD5 internal state are nearly equally likely. This makes it much more difficult for an attacker to produce collisions.


I'm sort of confused, because TLS 1.1/1.2 support across browsers looks to be quite poor at the moment, especially given the large number of IE visitors who will likely only upgrade to IE11 (the first version with TLS 1.1/1.2 enabled by default) sometime around the year 2016.

With that said, I was under the impression that sites will need to support TLS 1.0 for a good long while, and if that is indeed the case, would they not be better off using RC4? From my understanding, the RC4 attacks seemed less practical than attacks against the implementation of CBC mode in SSL 3.0 / TLS 1.0?


It's not that simple.

Yes, the installed base is going to keep TLS 1.0 and the legacy SSL block cipher construction in deployment for a long time.

Yes, smart people (among them AGL) have said that the RC4 attack is less practical than the M-t-E timing attack on the SSL CBC ciphers. (By the way, it would be great if we could start putting the blame on M-t-E instead of CBC; the vulnerability isn't in CBC per se. CBC is fine; M-t-E is proven not to be.)

But:

* The timing attack also has remediations (see AGL's famous NSS patch) which don't change the protocol.

* The timing attack is fundamentally unlikely to get more powerful; it's exploiting a very simple, well-understood problem.

* Work on exploiting the RC4 attack is in its infancy, and there are multiple ways the attack could get both fundamentally more powerful and more efficiently implemented.

* There are no software-only fixes to the RC4 problem that don't break the protocol; RC4 is fundamentally and irrevocably broken.


Interestingly, Google Chrome for Android is one of the few browsers I've seen that supports TLS 1.2 with AES-GCM.


But moving from AES and SHA to RC4 and MD5 should have set off a few alarm bells, if only because it is bad engineering and shows a lack of knowledge.

If we want to move away from MD5 and RC4, we first must start deprecating their usage wherever we can. Removing suites in SSL/TLS that use them is a pretty simple step. Moving _from_ good suites _to_ these suites is totally the wrong way to go.


What a surprise, tptacek defending Google no matter what... /s


I believe tptacek's comment to be historically accurate.

Disclosure: I work for Microsoft.


Learn a bit about cryptography and you too can simultaneously be judged an Apple fanboy, a Google defender, and an NSA apologist.


Close to this subject, there was a good invited talk entitled "Why does the web still run on RC4?" by Adam Langley at CRYPTO this year. I can't find a video online, but someone from the Bristol crypto group wrote a small report on his talk here: http://bristolcrypto.blogspot.fr/2013/08/why-does-web-still-....


This beautifully illustrates the power of open source. One guy was worried enough about security to start checking the crypto source, and was able to alert the community. I hope this leads to a more secure platform.


Not really. Basically all of this info was transmitted in the clear and easily visible in packet captures.

Admittedly it is quite a bit more convenient to look back in source code history rather than dig up and test old versions of the compiled code directly.


It was a downgrade during the Android life cycle. I don't want to be "that guy" but someone had to have a good reason to roll back to RC4-MD5 if they were using SHA before.

But hey, you can open an issue about it.


I asked my local SSL expert, and he mentioned: the list the client sends is just a preference list; the server can choose what it wants.

For example, nginx by default[1] specifies an OpenSSL cipher list of HIGH:!aNULL:!MD5, which you can examine by running

$ openssl ciphers 'HIGH:!aNULL:!MD5'

You'll see neither RC4 nor MD5 in that list. (You will if you run a plain "openssl ciphers", so you can see openssl knows about them but the config turns them off.)

(I'm an SSL newbie, please correct any mistakes I've made in the above.)

[1] http://wiki.nginx.org/HttpSslModule#ssl_ciphers


You are right, the final choice of the algorithm is with the server. I am not sure though if it is possible to give other ciphers a higher priority on the server without completely disabling RC4 (which is still better than no encryption / no connection).

Edit: effhaa mentioned http://httpd.apache.org/docs/current/mod/mod_ssl.html#sslhon... for apache in another post.


Nginx has an equivalent preference, ssl_prefer_server_ciphers on. (Scroll down a bit on evmar's link.)
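
Putting the two directives together, a rough server-side example (keeping RC4 available only as a last resort rather than disabling it outright - adjust the cipher string to your own requirements):

  # nginx: prefer strong ciphers, keep RC4-SHA only as a legacy fallback,
  # and make the server's ordering win over the client's.
  ssl_ciphers                HIGH:!aNULL:!MD5:RC4-SHA;
  ssl_prefer_server_ciphers  on;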


Why do you have to fix it in the apps? IIRC you could just specify a different order on the server side and enable "honor cipher order", so the server's preference is used? http://httpd.apache.org/docs/current/mod/mod_ssl.html#sslhon...

Not sure there, though.


Afaik, as long as a weak cipher is enabled on both client and server, a MITM attacker can force it to be used. It involves manipulating the handshake to tell both parties the other one doesn't support any better cipher.


Eh, no. Maybe in SSLv2, but the first thing TLS encrypts is a hash of the entire handshake. Modifying the cipher list would change those hashes into something different.
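
As a toy model of that check (not the real TLS PRF, just the shape of it): both ends MAC their own view of the handshake transcript under a secret the attacker doesn't have, so a tampered cipher list produces mismatching values and the handshake is aborted:

  import hashlib, hmac

  master_secret = b"derived from the key exchange"              # unknown to the MITM
  client_view   = b"ClientHello: AES256-SHA, RC4-MD5, ..."      # what the client actually sent
  server_view   = b"ClientHello: RC4-MD5, ..."                  # what a MITM forwarded instead

  client_finished = hmac.new(master_secret, client_view, hashlib.sha256).digest()
  server_finished = hmac.new(master_secret, server_view, hashlib.sha256).digest()
  assert client_finished != server_finished                     # mismatch -> connection torn down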

Unless you have a client which will happily disable a cipher and try again when encountering an error. But if you do that, you don't deserve any security.


Do you want to make the gamble that every server has "honor cipher order" on and configured in a saner way?


Yes, you might have a point, depending on the application - but I guess most apps do have their own set of "backend servers", so modifying the servers or the client should be pretty much the same effort. Modifying the server doesn't need a client-side update, though.


Weak cyphers should be disabled on the server entirely, not just re-ordered.


Knee-jerk disabling of RC4 because it's "weak" would almost certainly reduce the security of the Internet, because you can't simply evaluate TLS ciphersuites based on the strength of their core cipher; there are lots of deployed TLS clients that can't do block cipher crypto securely right now.


Examples please?


RC4 was first used as a mitigation for the BEAST blockwise-adaptive attack on CBC-with-chained-IVs from SSL 3.0 and TLS 1.0, and then again as a mitigation for the "Lucky 13" timing-based CBC padding oracle that remains a problem in TLS 1.2 when block ciphersuites are used.


Ok, thank you. I misread - I thought you meant that there were clients which weren't capable of handling block cipher suites (e.g. for performance reasons). You made me look into "Lucky 13" though, so at least I learned something!


I'm usually the first to rail against NSA shenanigans but I also believe you shouldn't ascribe to malice what can more easily be explained by stupidity.


And you probably shouldn't ascribe to stupidity what can more easily be explained by "not at all stupid or malicious".


I'm sorry, but I'm only a passing student of cryptography, and I've known that both RC4 and MD5 have been broken for quite some time now.

I don't remember the timeline, but if you're implementing code for algorithms and you decide to use the defaults "just because", you're being negligent - that is to say being pretty damn stupid.


Once again, with feeling: the fact that an algorithm is "broken" does not mean that a cryptosystem reliant on that algorithm is necessarily broken. In this particular case, the MD5 breakage is not currently relevant to TLS, and it might be decades before it ever is. And, while nobody particularly liked RC4, it was deployed to mitigate an even worse vulnerability in the MtE CBC construction in TLS.

Cryptosystems exist in strata: environments, algorithms, constructions, protocols, applications. A careful cryptosystem is designed so that a flaw in one stratum doesn't immediately destroy the entire cryptosystem. Not only did TLS largely succeed in that goal, but it succeeded in part due to the availability of RC4.

So: no. No, no, no.


I get your point about the security of the system as a whole: my point isn't that the algorithms are on the list, just that they're at the top of the list.

RC4 may have helped TLS to succeed, but it's 2013 - surely there's something that is robust enough to be used instead by now?

Of course the simple explanation could just be for performance reasons.


No, the simple explanation is backwards compatibility. There was a client-side mitigation to the MtE vulnerability, but it broke some tiny fraction of servers in the wild so it never made it to the stable release of NSS.


I think the NSA gave up the right to the "stupidity, not malice" defence when they started planting HUMINT agents in large companies to deliberately insert backdoors into their systems. Malice is there, and it is obvious.


  "The change from the strong OpenSSL cipher list to a hardcoded one starting 
  with weak ciphers is either a sign of horrible ignorance, security incompetence 
  or a clever disguise for an NSA-influenced manipulation - you decide!"

Survey says: Short-sightedness. Not really ignorance or incompetence (although that may be arguable), but it's certainly not "NSA-influenced manipulation". That's the sort of thing they reserve for countries, not consumers. For consumers, they rely on undisclosed 0-days, with the severe ones reserved for high-priority targets.

It's far more economical, considering the scales of this vacuum, to simply rely on service providers freely handing over data on their customers rather than breaking crypto.

Side note: The "OMG NSA!!" hyperbole is starting to fray at my nerves. Not everything is a conspiracy. It doesn't need to be when willing participants are holding the keys to the castle in the first place.

Relevant: http://xkcd.com/538/


The N.S.A.'s Sigint Enabling Project is a $250 million-a-year program that works with Internet companies to weaken privacy by inserting back doors into encryption products.

From http://www.nytimes.com/interactive/2013/09/05/us/documents-r...


I could have sworn I read that as "that works with Internet companies". Like I said...


Like it or not, "OMG NSA" is now part of the lexicon in a post-Snowden world. It's how every piece of technology developed Stateside is going to be perceived from here on out.


I bet this is stupidity, not NSA.


I find it hard believing that stupidity explains a deliberate change by Google engineers.


If only there were some possibility of a third option instead of just stupidity or maliciousness...


And I'll just finish that thought, since there are real engineers involved who probably had good intentions and skills: (as the article stated) the Google engineers were trying to improve compatibility and also seemed to follow the path of what other platforms (Java) had done in the past.

Code reviews happen every day in the industry, and oftentimes it's amazing how many flaws and defects are found - but often internally, not exposed for the world to see and speculate on. The nature of open source is that this is all out in the open, and that's fine. It's also good that Google is actively paying bounties for discovering/fixing these types of bugs in a variety of major open source projects.


That's not the third option he was thinking of.


How about both?

An engineer on the payroll of NSA, and then stupidity on the part of whoever signed off on the commit?


How about neither? The designers of the SSL3-era ciphersuites knew that MD5 was shady but had few better alternatives because those ciphersuites predate even the SSL3 standard itself and thus readily available SHA1, so they used constructions that remain secure 20 years later even with broken hash cores. And subsequent designers and implementors have swapped RC4 in and out of TLS as needed to mitigate performance problems that would have ruled out TLS entirely, and then later to mitigate attacks on TLS ciphersuites that are in fact worse (currently) than the RC4 vulnerability.

I know where you're coming from (YOU JUST HATE AMERICA) but this just isn't a politically volatile issue.


<tin foil hat> Is it plausible that the NSA chose to leak enough "hints" that lead to apparently-independent discovery of things like BEAST and M-t-E, making reverting to older and known-broken cyphers like RC4 seem to be "the correct pragmatic decision" (quite possibly seeding those discussions with ideas that lead even completely innocent open source developers to choose and justify why they've just baked crypto that's completely vulnerable to un(publicly)known NSA exploits)?

(It's a little hard these days to know what's a "paranoid fantasy", what's an "interesting cypherpunk plot", and what's "a realistic and/or confirmed NSA threat" - at least for me…)


If by "leaking hints" you mean "screaming at the top of their lungs in public protocol design discussions not to do it this way until they got sick of being nibbled to death by committees of ducks and gave up" you might be interested in looking into the papers of one Ex-NSA P. Rogaway.


It's not political at all - if the NSA has viable attacks against RC4, doing small, not-carefully-scrutinized things like this to ensure it continues to be used widely would make a lot of sense.

I'd expect them to go so far as to attempt to stop publication of comparably-performant ciphers that could conceivably take its place. You'll recall the 1974 precedent involving IBM's independent discovery of differential cryptanalysis.

If there's one thing I'm even more vigilant about than distrusting the US government/military, it's not underestimating the lengths to which they will go to achieve their ends.

PS: This has nothing to do with md5.


There is some good advice on how to improve the situation in the appendix section.


I will be adding advice from the discussion here, so feel free to comment! :)


Hopefully CyanogenMod devs, if not Google itself, will fix it now that they are aware. In 2010 it may not have been seen as a priority, but since last June it is for everyone.


CyanogenMod merged a "fix" into the repo earlier today:

http://review.cyanogenmod.org/#/c/51771/

... only to revert it later:

http://review.cyanogenmod.org/#/c/51794/

The revert noted "TLS v1.0 + AES is a bad combo, and entirely possible to happen with these priority lists".

In other words, the proposed "quick fix" was dangerous. There's some reading material on BEAST attacks here:

https://blogs.akamai.com/2012/05/what-you-need-to-know-about...


Would this flaw be "patchable" using Cydia Substrate for Android? Might be good as a quick fix.


<alarmism>oh by the way the stock android browser up to version 4.2 and maybe beyond LEAKS ANYTHING YOU TYPE IN THE ADDRESS BOX IN CLEARTEXT OVER THE NET. </alarmism>



