Why Google is Hurrying the Web to Kill SHA-1 (konklone.com)
423 points by mathias 1138 days ago | 131 comments



Two nits, both pedantic:

An attack on SHA1 that makes certificate forgery viable within the next few years doesn't seem very likely, although over the long term it might be. The attack on SHA1 isn't like the attacks on RSA-1024; my sense is that the literature already knows how to break RSA-1024 given enough compute, but does not know how to do that with SHA1. Further, factoring RSA-1024 provides an attacker with a total break of RSA-1024 TLS, but not every attack on SHA1 will necessarily do the same.

Second, there's a subtext that SHA-3 having been standardized somehow puts the writing on the wall (albeit, a far-away wall) for SHA-2. Not so; SHA-2 could remain secure (in its TLS certificate use case) indefinitely.


To out-pedant you: even assuming that the differential collision attacks we know about are incorrect [1], we absolutely know how to break SHA-1 given enough compute, that is, roughly the same resources needed to break RSA-1024. The answer is generic collision finding with parallel rho [2].

[1] https://marc-stevens.nl/research/papers/EC13-S.pdf [2] http://people.scs.carleton.ca/~paulv/papers/JoC97.pdf
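For the curious, the generic collision-finding idea can be sketched in a few lines. This toy version (my own illustration, not taken from either paper) truncates SHA-1 to 32 bits so a single sequential rho walk finds a collision in seconds; the real attack runs many such walks in parallel over the full 160 bits:

```python
import hashlib

def f(x: bytes) -> bytes:
    # Truncate SHA-1 to 32 bits so the demo finishes quickly;
    # the real attack runs the same walk over the full 160 bits.
    return hashlib.sha1(x).digest()[:4]

def rho_collision(seed: bytes):
    # Phase 1: Floyd cycle-finding on the iterated sequence x -> f(x).
    tortoise, hare = f(seed), f(f(seed))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))
    # Phase 2: reset one pointer to the start; both now meet at the
    # cycle entry point.  The two distinct predecessors of that entry
    # point are a collision pair under f.
    a, b = seed, hare
    while f(a) != f(b):
        a, b = f(a), f(b)
    return a, b

a, b = rho_collision(b"hn")
assert a != b and f(a) == f(b)
```

Scaling this walk to the full hash is what costs ~2^80 SHA-1 calls, which is why the resource comparison to RSA-1024 is apt.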


You haven't so much out-pedanted me as refuted me. :)


I added links to both papers to the bottom, and removed the "we'll probably need to upgrade" to SHA-3 sentence fragment.


It seems like the identical-prefix collision would be a good candidate for an ASIC implementation.


I don't know if "10 years" falls in your definition of "next few years".

For a viable rogue CA attack, you need a chosen-prefix attack. Current best research (https://marc-stevens.nl/research/papers/EC13-S.pdf) shows it should take 2^77.1 SHA-1 compression calls to do a chosen-prefix attack. Say this is improved to 2^65 within the next 10 years. Right now a good GPU (AMD R9 290) can do 3 billion SHA-1 compression calls per second. Say Moore's Law continues for the next 10 years and that 10 years from now a GPU can do 20 billion SHA-1 compressions per second. So 10 years from now, 100 high-end GPUs should be able to produce a rogue CA cert with a colliding SHA-1 signature in 7 months of compute time.
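Sanity-checking that arithmetic, using the same assumed numbers (2^65 compressions, 100 GPUs at 20 billion compressions per second):

```python
# All inputs here are the assumptions from the estimate above,
# not measured figures.
calls = 2 ** 65                      # assumed future chosen-prefix cost
rate = 100 * 20e9                    # 100 GPUs x 20e9 compressions/sec
seconds = calls / rate
months = seconds / (30 * 24 * 3600)
print(round(months))                 # -> 7
```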

Change one little assumption and assume the best attack ends up being 2^60 instead of 2^65. In this case, a viable attack could certainly be carried out in the next 3-4 years.

You can't cross your fingers and hope such an attack will not be discovered. The time to abandon SHA-1 is now.


Firstly, GPUs haven't followed Moore's Law.

Secondly, multiple SHA-1 ASICs exist.

Thirdly, WebGL has made it trivial to gain vast GPU resources. 20,000 viewers for two hours can be bought for $20.

Fourthly, I don't care.


> Firstly, GPUs haven't followed Moore's Law.

Yes they have. Any integrated circuit that tries to pack as many transistors as possible on a die is, by definition, following Moore's Law. To convince you: http://www.mumblegrumble.com/visual/roadmap/other/nvidia_moo...


> 20,000 viewers for two hours can be bought for $20

Is that pricing from a botnet or a company like crowdprocess.com?


I agree.


@tptacek - I tried to include enough detail to make it clear that a SHA-1 forgery isn't as trivial as a brute force. That you'd have to "coax a Certificate Authority" into issuing you a targeted forgery, and that that's what the MD5 team did.

The SHA-3 mention at the very bottom was in the spirit of "all things are broken eventually", not a specific comment on SHA-2 (though my understanding is that there are some conceptual weaknesses that have been identified). I don't think I've confused the issue there, but if I see confusion I'll definitely update it.


Another way to think about SHA2 and SHA3 is that it's entirely possible that SHA3 could fall before SHA2 does. They are unrelated algorithms.

I'm also not comparing attacks on SHA1 to brute force (which is also not how MD5 fell).

It would be helpful, when people posit attacks on SHA1, if they'd cite the literature they're referring to.


> Another way to think about SHA2 and SHA3 is that it's entirely possible that SHA3 could fall before SHA2 does. They are unrelated algorithms.

Very good point, though I would expect SHA2 to see far more research on weakening it. It's been around a lot longer, and its wider deployment makes it a much higher value target. (Is SHA-3 supported anywhere right now?)


You can easily make the converse point and claim that SHA2 has a higher probability to resist future cryptanalysis than SHA3, given that SHA2 has already had a lot more research than SHA3, but is still not broken. "Old" is a feature in this sense. The only issue I know about with SHA2 is its length extension property. And that is by design.


Why SHA-2 instead of RSA 4096 or SHA-256? Even if RSA is compromised, a 4096-bit key will take a lot more resources (maybe a few more hours) to break.

==edited==

Thank you for the reply.


SHA-256 is one of SHA-2's hash functions: http://en.wikipedia.org/wiki/SHA-2


SHA-256 is a form of SHA-2.


You use a hash function (e.g. SHA256) to make the hash of the page and a signature algorithm (like RSA 4096) to sign it.
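A minimal sketch of that split, using Python's stdlib for the hash half and a loudly fake stand-in for the signature half (real code would use RSA or ECDSA from a crypto library):

```python
import hashlib

def digest(tbs: bytes) -> bytes:
    # The hash half: SHA-256 is one member of the SHA-2 family.
    return hashlib.sha256(tbs).digest()

def toy_sign(d: bytes, key: int) -> int:
    # NOT real cryptography -- a placeholder to make the hash/sign
    # split concrete.  A CA signs the digest, never the raw bytes.
    return int.from_bytes(d, "big") ^ key

tbs = b"subject=example.com pubkey=... notAfter=..."
sig = toy_sign(digest(tbs), key=0x10001)
# Any tbs' colliding under digest() would carry the same signature,
# which is why the hash function's strength matters for certificates.
```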


> my sense is that the literature already knows how to break RSA-1024 given enough compute

"given enough compute", we can break any crypto just with pure bruteforce, although in practice I believe there's a point at which the amount of power that would be required becomes physically impossible due to the limits of computation within the universe (i.e. Moore's Law will definitely end sometime). To me, that says using extremely large hash sizes can keep things quite secure - even attacks that reduce complexity by many orders of magnitude could be impossible in practice - e.g. a 2048-bit hash for which a 2^500 complexity attack is found won't be any less practically secure.

...unless we somehow discover that P = NP, in which case the world could become a very interesting place...


A while back I launched a SSL scanner [1] and got tons of feedback from people at Facebook, Google, Microsoft.

The most divisive item was how to represent SHA1 deprecation. The OP's article doesn't really touch on it, but the reason that Google and everyone else haven't moved on is that there still exists a sizeable number of clients that can only accept SHA1 (and will error on anything else).

I actually suspect that large sites like Facebook, etc will maintain multiple certs at the different levels and dynamically serve the best one up that the client can support. They're already doing things like only serving HSTS to browsers that identify as Chrome, etc.

1 - https://www.expeditedssl.com/simple-ssl-scanner/scan?target_...


> I actually suspect that large sites like Facebook, etc will maintain multiple certs at the different levels and dynamically serve the best one up that the client can support.

How would you do that? When the TLS connection is established you know nothing about the client except its IP address. All of the interesting information about the browser is transported via the HTTP stream which is tunneled inside the TLS connection.

HSTS is simple by comparison, as it's only an HTTP header.


I think they would use the list of ciphers the browser says it will accept in the TLS handshake, or maybe TLS version.

Anything from ClientHello could be used: http://en.wikipedia.org/wiki/Transport_Layer_Security#Basic_...
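As a sketch of the server-side hook (hedged: Python's ssl module only exposes the SNI part of the ClientHello, so this picks by hostname; selecting by cipher suite or TLS version would need a lower-level TLS stack, but the mid-handshake swap is the same idea):

```python
import ssl

# Two contexts; each would be loaded with a different chain, e.g.
# sha2_ctx.load_cert_chain("sha2.pem") -- filenames hypothetical.
sha2_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sha1_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

def pick_cert(sslobj, server_name, ctx):
    # Called mid-handshake, before any HTTP bytes flow; swapping the
    # context here changes which certificate the client will see.
    sslobj.context = sha1_ctx if server_name == "legacy.example.com" else sha2_ctx

sha2_ctx.sni_callback = pick_cert
```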


Not true; consider SNI as an example of the server choosing a certificate as part of the handshake, without a cleartext exchange of the hostname.

http://en.wikipedia.org/wiki/Server_Name_Indication


Actually, the SNI extension is sent in the clear. That's one of the things TLS 1.3 is supposed to fix. (See e.g. http://www.ietf.org/mail-archive/web/tls/current/msg10484.ht... for a discussion about how to handle SNI there). You have a point, though, in that the TLS extensions sent by the client might give you some indication of what client you are talking with. I would not count on it though, and even if it works, such heuristics are a hell of an ugly hack inside the TLS stack.


Actually, in the case of SNI, the hostname IS sent in plain text. It's sent with the initial ClientHello message so that the server can use it to select the proper server certificate for the session.


a) This is great. Also, a friend linked me to Expedited SSL yesterday and I link to it in the bottom of this post.

b) I think its SHA-1 scanner is mistaken - it flags my site as using SHA-1, but it's SHA-2 in every cert in its chain: https://www.expeditedssl.com/simple-ssl-scanner/scan?target_...


The root CA, USERTrust, is SHA-1 signed.

Everything else is SHA-2 however as you said.


That's right, but the root cert is not sent by the server (in my case). More importantly, SHA-1 isn't a problem for root certs, as their signature is not used to verify their integrity.


Then what is the signature for?


Because it was seen as easier to use X.509 (aka a certificate) as a delivery format for a Trust Anchor (specifically, a "subject name" and "public key" pair) than to invent yet another storage format.

Certificate verification stops when you encounter a TA. Some libraries do the wrong thing and check that the terminal cert is self-signed, but that's not actually required (nor recommended). You just check that the previous cert is signed by the TA, which involves checking the previous cert's signature (aka the intermediate) with the trust anchor's key.

That's why the trust anchor's signature is irrelevant.

See RFC 6024 for a discussion of terminology and concepts.
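The verification loop described above can be sketched like this (a toy model with a fake signature scheme, not real X.509 parsing):

```python
# Toy sketch: check each cert with its *issuer's* key, walking up
# until we hit a configured trust anchor -- whose own signature is
# never examined.  The "signature" here is a stand-in, not real crypto.

def toy_sign(subject, pubkey, issuer_key):
    return hash((subject, pubkey, issuer_key))

def make_cert(subject, pubkey, issuer_key):
    return {"subject": subject, "public_key": pubkey,
            "sig": toy_sign(subject, pubkey, issuer_key)}

def check_signature(cert, issuer_key):
    return cert["sig"] == toy_sign(cert["subject"], cert["public_key"], issuer_key)

def verify_chain(chain, anchors):
    # chain: leaf first, then intermediates.  anchors: trust anchor keys.
    for cert, issuer in zip(chain, chain[1:]):
        if not check_signature(cert, issuer["public_key"]):
            return False
    # Verification stops here: the top cert must be signed by some
    # anchor's key.  No anchor certificate is ever signature-checked.
    return any(check_signature(chain[-1], key) for key in anchors)

ta_key = 0xACE                        # a trust anchor is just a key (+ name)
inter = make_cert("Example CA", 0xBEE, ta_key)
leaf = make_cert("example.com", 0xCAB, inter["public_key"])
assert verify_chain([leaf, inter], anchors=[ta_key])
```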


That makes perfect sense, thanks!


Simplicity and uniformity?

There's less chance to screw things up if the spec says that root certs should look exactly like all other certs, rather than trimming out parts that shouldn't be needed.


Thanks for the link! Also, I'm investigating the issue as you are correct that your certs all look good.


I'd like to see them gradually downgrade all non-PFS connections. Non-PFS connections should be considered medium-to-highly vulnerable, and shouldn't receive a green icon in browsers.

Unfortunately, they've just recommended everyone to use "2048-bit keys" when they announced the HTTPS Google ranking policy. A lot of developers won't understand the difference between a 2048-bit RSA key and a 256-bit ECC key, so they'll just pick RSA, since "Google said 2048-bit keys!". Sooo...maybe this policy will come in 10 years.

http://googlewebmastercentral.blogspot.com/2014/08/https-as-...


This seems to be a case of a slightly unfortunate wording.

Google calls for the use of 2048-bit key certificates, a very reasonable demand. In the forward secure use case, the certificate key is only used for authentication. Using, say, ECDHE_RSA as your key exchange mechanism allows for small but secure elliptic curve keys (EC), forward security (DHE) and uses the certificate's RSA key for initial authentication (RSA).

Certificates can actually use ECDSA keys, and some companies will support this (Symantec and CloudFlare off the top of my head), but I'm not exactly sure about browser support. The chief advantage, as far as I know and assuming no new breaks in RSA, is a strong reduction in certificate file size (256-bit vs 3k RSA equivalent), not forward security.


Everyone is vulnerable: https://www.google.com, https://www.facebook.com, https://www.svyft.com as per the link provided in the article (https://shaaaaaaaaaaaaa.com)


Luckily shaaaaaaaaaaaaa.com itself is fine: https://shaaaaaaaaaaaaa.com/check/shaaaaaaaaaaaaa.com


From the OP:

If you poke around Google's SSL configuration, you'll see that (!) they use certificates signed with SHA-1. But each certificate expires in 3 months, a short-lived window that reduces the chances that a certificate could be forged, while they migrate to SHA-2 in 2015.


If going SHA-2 only requires a request flag, why so long for a transition? Is there some downside (e.g. old clients that don't support it) that holds Google off?


First, it requires more than just a request flag, since that flag only affects the signature algorithm in your certificate signing request. Your certificate authority has to actually support signing certificates with SHA-2, and also needs a chain that uses SHA-2 signatures. There are some certificate authorities that are lagging behind here, such as RapidSSL.

Second, there are old clients out there that still don't support SHA-2. Namely, pre-SP3 Windows XP and pre-2.3 Android.

Edit: originally this comment said that only IE on pre-SP3 Windows XP was affected; apparently Chrome on pre-SP3 is as well, presumably because it uses some system libraries.


Windows XP SP 2 (SP 3 is fine) and early Android, I believe, are the clients that don't support certs signed with anything newer than SHA-1.


Just curious — does X.509 support multiple signatures, so both SHA-1 and SHA-2-based sigs could be included, one for legacy user-agents and one for modern ones?


It has to be so frustrating to Google that the people responsible for Android make it so hard for users to upgrade to versions that support SHA-2.


It's not Google's fault the telecoms cripple every phone they sell.


Well, Google knew it would happen, and allowed them to do it.


There are also other (mostly unsupported) mobile devices which don't. Like old ebook readers which have browsers for some reason.


Lots of old clients still out there, including people who don't have the option to upgrade.

A couple of years ago we tried upgrading our certificate to SHA-2, and rolled it back within an hour, because it broke the site for several of our customers.

It might work now; IE6 users have (finally!) dropped to about 0, but we certainly have tons of IE7 users, and I'll have to look up versions of Windows they're using before we try it again.

We work with hospitals whose IT departments need to control changes to their computing environments extremely carefully, and upgrades are unfortunately quite expensive and difficult for these kinds of environments.


Hopefully most enterprises moved to SP3 years ago. MS officially dropped XP SP2 support in mid-2010, and while MS does do Custom Support for older service packs, I hope no one is relying on it now. I think Custom Support for WinXP SP3 is IE8-only after the first year, BTW.


My company website is one of the only ones I tested that actually passed. At first I thought there was simply a shocking lack of adoption, but based on this thread, it seems to be partly a case of "wow, this is nowhere near as common as it should be," and partly the tool being somewhat overly strict about what passes.

Either way, Google has made it pretty clear that they want at least SHA-2 certificates, which, so long as they call it out in address bars, warning interstitials, and make noise about SERP impact, means that this is the way things are going.


You have it kind of backwards. It's not these sites or their certificates that are vulnerable, but the certificate signing process itself. And by extension, all browsers that accept SHA1 certificates anywhere are. To clarify, what the attack does is generate two certificates that have the same SHA1 hash, one of them legitimate and one of them illegitimate. In the worst case scenario, the illegitimate one is itself an intermediate CA, which means it can MITM any connection. You send the legitimate one to the CA to get it signed. When you get it back you swap the legitimate certificate with the illegitimate one - which is possible as both have the same hash - and voila, you have broken TLS world wide.

Having a certificate with SHA2 will not save you. A client under attack will not even see it. The only thing that helps is stop accepting SHA1 certificates (and especially SHA1 intermediate CAs) globally. All this stuff about accepting short lived certificates is only a publicity stunt by Google to raise awareness about the issue (an attacker can forge a certificate with any expiry time she wishes).
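To make the swap concrete, here's a toy version with SHA-1 truncated to 16 bits so a collision appears instantly. Both colliding strings here are benign; a real chosen-prefix attack shapes one of them into an intermediate-CA certificate:

```python
import hashlib, itertools

def weak_hash(b: bytes) -> bytes:
    return hashlib.sha1(b).digest()[:2]   # 16-bit toy hash

# Birthday search: ~300 tries at 16 bits, vs ~2^80 for full SHA-1.
seen = {}
for i in itertools.count():
    cert = b"CN=innocent.example, serial=%d" % i
    h = weak_hash(cert)
    if h in seen:
        legit, rogue = seen[h], cert
        break
    seen[h] = cert

ca_signature = weak_hash(legit)  # stand-in: the CA signs only the hash
# The swapped cert passes verification because only the hash is signed:
assert rogue != legit and weak_hash(rogue) == ca_signature
```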


Vulnerable to what? Perhaps better to say, "everyone would be vulnerable".


[deleted]


Someone on HN knows this subject much better than I do, but as I understand it, there's no attack in the literature that takes a good certificate request and $2MM as an input and spits out a validating certificate as an output.

This is different than the situation with MD5, where the components needed for a successful attack were known to the literature, and the real work was (a) scaling the attack so that it could perform within the time windows needed to forge a TLS certificate and (b) putting all the pieces together.

(But see upthread with 'pbsd, who is one of those people on HN who knows the subject much better than me).


HTTPS is meant to prevent wiretapping and man-in-the-middle attacks.

The problem now is that even if you've established an HTTPS connection, the weak SHA-1 encryption will not protect you.


The SHA1 vulnerability being contemplated here affects only the establishment of an HTTPS connection; the attack scenario involves obtaining a forged certificate.


That's an especially pedantic correction, as it does not impact the meaning of the statement.

s/the weak SHA-1 encryption/the weakened SHA-1 hash used to verify the certificate that's used to authenticate the encrypted connection/


It is, you're right. It's a hobbyhorse of mine, though, because SHA1 (and MD5) appear in TLS ciphersuites as MAC components, and those uses are not known to be vulnerable at all.


So where is the fully automated solution for rotating certificates?

I've been looking for a CA who will provide an API to send the cert request, an easy way to prove the domain ownership which doesn't involve SMTP, and the signed cert handed straight back from the API, but haven't found it.

So far the most I've been able to streamline my certificate requests is to automate generating the CSR, skip setting the MX record, just bind SMTP to www.domain.com, get the validation email at 'admin@www.domain.com' and auto-forward to my actual email address... so it's mostly automated, but I still have to copy/paste the cert request string into the CA's webform, click the 'Approve' link in the forwarded DV mail, and then copy/paste the final cert from inside email back to the shell where it can finish the import.


I'm working on this problem: https://sslmate.com/

Right now it's just a command line client, but a public API is in the works. And this week we'll be announcing a solution to the cert rotation problem (basically, you'll be able to drive your renewals from cron - it's going to be really cool). You might want to follow @sslmate on Twitter - this is just the beginning of some very exciting stuff for automating SSL cert deployment. Also feel free to email me (address is in my profile).

Sadly, we're still SHA-1 only, because that's all that our certificate authority (RapidSSL) supports at the moment. On the other hand, once we make renewals dead simple, you can just buy 1 year certs and it won't be a big deal upgrading to SHA-2 in a year's time. (After all, even Google is still using SHA-1, but they can easily switch thanks to their 3 month certs and well-oiled cert deployment machinery.)


Very interesting, thanks! I unpacked the .deb, the nodejs source is pretty easy to follow, so I'd say you pretty much already have the public API done. ;-)

The /link API is interesting, versus generating a token on your site through the UI. You might want to consider allowing an explicit $$ limit on /buy, since you store the api-key in the clear (albeit in a config file set to 0600).

It looks like you still rely on being able to receive an email on the domain and click an approval link, though. I'm sure this is a RapidSSL requirement, but it makes full automation more complex (certainly not impossible).


Thanks for checking it out! Yeah, it's really just a matter of documenting the API ;-)

Unfortunately it does rely on being able to receive an email, as this is a requirement of virtually all certificate authorities, though the email address in the whois record is also an option (at least for TLDs which list the email in whois). I have some ideas to make this easier for users who don't otherwise receive mail at their domains, such as by letting them point their MX record at sslmate.

A configurable $ limit on /buy is a very good idea; also I should make it possible for users to use sslmate without permanently storing their API credentials on the filesystem.


Are you certain SHA-2 is not accepted by RapidSSL? Section 6.1.5.1 of http://www.rapidssl.com/resources/pdfs/geotrustCPSv1dot10.pd... seems to indicate that they do support SHA-2.

Did you speak with them about this yet?


Commercial solutions exist too; the company I work for (Venafi https://www.venafi.com) makes certificate management software that is designed to solve this very problem.


Understanding that this is a naive outsider perspective, I find it strange that it's any sort of emergency when a single collision has yet to be produced. And then, does the latest hash collision attack allow you to make a collision with a _specific_ target or just make a collision in general? Finally, even if you hit the target with some junk that happens to hash to the same thing, is it going to be in correct file format, and within an acceptable size? It seems like there are a handful of hurdles for the bad guys to go over before we're in danger.

I know crypto is not to be taken lightly, and I'm glad people would rather be safe than sorry, and I'll avoid SHA-1 in my own personal security use (`sha256sum` is sha-2 right?). I'm just curious.


> when a single collision has yet to be produced

No collisions published to the public doesn't mean no collisions have been found.

As the article says; “we should assume that the worst vulnerabilities go undisclosed.”


When a collision is produced it will be too late. The time to act is before that happens.


I guess that's the surprising part. I figured that that's just the first hurdle, there's still the file length and format.


The point is that producing a single collision may currently cost a couple million dollars of brute force. So we should expect it to be used (there are attacks where that much money is invested, either against a very valuable target or very many low-value targets), but we should expect to see one only after a highly targeted attack is detected and analyzed - in the case of Flame, those steps took a few years.

Some government agency MITM'ing major social sites or email providers would be rather possible at that cost.


An unrelated annoying thing: I want to disable TLSv1 support for my site, for obvious reasons. I don't care about backwards compatibility for my personal site, but I still can't flip the switch... because Googlebot doesn't support anything newer than TLSv1.


Thank you google.

I'm a developer, but I'm not responsible for SSL cert acquisition.

The ONLY way I can get the people responsible for that to stop using SHA-1 is to tell them that users' browsers are showing a warning/error message on it.

I will eagerly await Chrome doing that.


It's surprising how much energy Certificate Authorities invest into arguing about this. Instead, they should invest that energy into improving their SHA-2 support and helping their customers migrate.


They are businesses. Their customers are mostly businesses. SHA1 is, for most businesses that only want a padlock to reassure their customers, just peachy. A CA that hassles their customers and says "you need to do complicated extra work" is put at a disadvantage to other CA's that have a "customer is always right" kind of attitude. Combine that with tools that default to SHA1 and customers that may, depressingly, still have Windows XP SP2 terminals in production use, and you get foot-dragging.

I don't think this is inherently a problem with the CA model. Rather it's what you'd expect in a competitive market that is basically selling a binary commodity product (a padlock icon), given textbook economics.


What's weird though is that they have a consortium. They could have all agreed simultaneously to stop issuing SHA1 certs years ago and at no market loss. But they didn't.


No. They've been quite clear about this. CA's are still selling SHA1 certs because customers are asking for them. They're asking for them because they're compatible with more apps/devices and - until now - browsers treated them the same. So why sacrifice compatibility for no improvement?

The Chrome team are right to push this along, but I do have some sympathy for the CA's here too. I read the whole discussion and it's pretty clear that there was some epic miscommunication going on here. Notably Google thinks that removing the padlock icon is not "deprecation" according to the timetable Microsoft established, but all the people buying certs disagree; that's why they're doing it.


> So why sacrifice compatibility for no improvement.

No improvement? How about improvement in security?


The reality is most people running websites that use SSL are not security experts and cannot evaluate the weakness of SHA1 vs SHA2. So they trust the defaults, which are SHA1 (in e.g. openssl). Their goal is to get the padlock, not to achieve some other notion of security.


It's just like general aviation: changing anything means anyone who got p0wned (or crashed) in the last decade is going to file a David vs. Goliath lawsuit with the change itself as evidence of negligence. But if they change nothing, then they admit to no mistake.


Nothing should surprise you about the obstinacy of CAs.


Would be interesting to know how this affects Git version control, which has SHA-1 at its core.


Google's decisions with respect to certificates don't really affect git much. But, if you're curious about how git handles pulling from repositories containing malicious hash collisions, Linus has talked about that in the past:

> If it has the same SHA1, it means that when we receive the object from the other end, we will not overwrite the object we already have.

> So what happens is that if we ever see a collision, the "earlier" object in any particular repository will always end up overriding. But note that "earlier" is obviously per-repository, in the sense that the git object network generates a DAG that is not fully ordered, so while different repositories will agree about what is "earlier" in the case of direct ancestry, if the object came through separate and not directly related branches, two different repos may obviously have gotten the two objects in different order.

> However, the "earlier will override" is very much what you want from a security standpoint: remember that the git model is that you should primarily trust only your own repository.

> So if you do a "git pull", the new incoming objects are by definition less trustworthy than the objects you already have, and as such it would be wrong to allow a new object to replace an old one.

...

> in this case, the collision is entirely a non-issue: you'll get a "bad" repository that is different from what the attacker intended, but since you'll never actually use his colliding object, it's literally no different from the attacker just not having found a collision at all, but just using the object you already had (ie it's 100% equivalent to the "trivial" collision of the identical file generating the same SHA1).

http://stackoverflow.com/q/9392365
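Some context on what SHA-1 is actually hashing here: a git object id covers a small type/length header plus the content, so a forged object must collide on that full encoding, not just on the file bytes. A quick sketch of the blob case:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git hashes "blob <size>\0" followed by the content; commits and
    # trees use the same scheme with their own type tags.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

oid = git_blob_id(b"test content\n")
# This is the same id `echo 'test content' | git hash-object --stdin` prints.
```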


I assume you would need to forge a meaningful (and potentially harmful) commit with the same SHA-1 as an existing one to do harm. That's probably more difficult than forging an SSL certificate (since the actual contents of the blob are more constrained than the certificate file, probably). I'm also not really sure what would happen if commits made after the "compromised" one happened to conflict with it, but I'm pretty sure the devs would notice something fishy going on pretty quickly.

That being said, git/mercurial and friends will have to transition to another hashing algorithm sooner or later, but it's not as urgent as web certificates, security-wise.


I'd think the content of the blob is less constrained, because you can put whatever padding fluff you need at the end of the commit message to adjust the sha1 of the commit. To cause harm, you'd include the trojanized code in a different file which your colliding commit blob references. (So, you'd need to include multiple blobs, but only the commit-message one needs to collide with an existing one)

A collision in commit hashes would mean you could no longer trust a signed tag, for example.

Look up the old stripe-ctf "gitcoin" challenge and you'll see it really is quite easy to meddle with commit hashes :)
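The gitcoin-style grinding mentioned above is easy to sketch: append throwaway padding to the commit body and iterate until the object id has some property you want. Here I aim for an id starting with "00" (the commit fields are fabricated for the demo):

```python
import hashlib, itertools

def commit_id(body: bytes) -> str:
    # Same scheme git uses: "commit <size>\0" + body, hashed with SHA-1.
    return hashlib.sha1(b"commit %d\x00" % len(body) + body).hexdigest()

base = (b"tree " + b"0" * 40 + b"\n"
        b"author a <a@b> 0 +0000\n\nmsg\n")

# Grind a padding nonce until the id starts with "00" (~256 tries on
# average).  Forcing a full 160-bit collision is the hard version.
for nonce in itertools.count():
    body = base + b"padding: %d\n" % nonce
    if commit_id(body).startswith("00"):
        break
```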


It seems to me like the time horizon for an attack on git based on SHA-1 collisions would be much, much longer than other similar collision attacks (like signing binary executables), because of the sequential nature of version control. Presumably the utility of such an attack would be to maliciously insert code into an earlier version of the code to hide its origins, in which case for each commit on the file, they'd need to calculate a collision that contains their changes plus whatever legitimate changes have been made to that file.

I'm not familiar enough with the inner workings of git to know, but I imagine it would be a pain to juggle updating new commits from people working from a local copy of the repo - potentially your malfeasance would be detected quickly if you tried to calculate a diff between the old file and the new file from the local copy (which would presumably not be updated, given that the checksums match).

That said, if people are in the habit of signing their tags/commits using SHA-1, then that would be just as vulnerable as any other signing problem.


This is for SSL and certificate validation - Google's move won't affect git in any way.

Git uses it to ensure that the data that comes out is exactly what went in (like a much better checksum than md5 or crc32 etc).

I don't know what the security implications are on the git side... I suppose an attacker could try to figure out how to change source code in a way that it preserves a commit log.


One would think it would have been a good opportunity to change to SHA-2 after Heartbleed, since most websites had to get reissued certificates anyway. Since this process is a pain in the * then one could have killed two birds with one stone at the time. Alas


In fact, Heartbleed helped a lot: http://news.netcraft.com/archives/2014/05/05/sha-2-very-cryp... But there's a long way to go.


Also, it's not exactly fair to take the Flame attack on MD5 and compare it directly to SHA-1. Unless you are the US or China, you likely don't have the resources necessary to pull off that sort of attack.

The Flame attack's math was invented by an internal government cryptographic think tank, and it still had to leverage massive computational power - just not on the order of hundreds of millions of dollars.

The idea of a rogue group having access to both of these resources is slightly idiotic. It would be far easier for them to attack RSA directly if they had tens of millions of dollars of computers. There are a lot of 1024-bit certs you could pick off for easy profit.


Is it not far easier to acquire the necessary computing power than it was in past? As more and more people converge on the internet, the attack vector increases dramatically as well. How much easier is it today to get the same botnet the government used for Flame? Furthermore, the idea of safety behind computational difficulty is going to be eroding away quicker than ever, in the future.


Botnet.


As I stated.

The computation power needed to break SHA-1 is higher than attacking RSA. So if you are financially motivated, attacking RSA has a higher ROI.


While true, "There's something more profitable to attack at the moment" seems like a lousy way to handle security.


I never said moving away from SHA-1 was irrelevant. I was simply stating that they were overestimating how common the FLAME attack could be pulled off.

Flame, like Stuxnet, is state of the art. Highly funded state of the art. A lot of security researchers look at these things and simply say, "Are you shitting me?!"

Call me optimistic, but I highly doubt we'll be uncovering a new Stuxnet every single year.


> The computation power needed to break SHA-1 is higher than attacking RSA[-1024].

You better back that assertion with something. Even in the worst possible case (for SHA-1), it seems to me that SHA-1 is cheaper to collide than it is to factor a general 1024-bit integer.


Exactly.


This article is amazing! I work in the SSL industry and this is a huge help in summarizing exactly what's going on.

@konklone, have you followed the CAB Forum's mailing on this topic? It's the most I've seen them argue in well over a year.


I have, it's one of the links in there. I truly loved reading that discussion.


Just to be clear, since I often end up confused on this point — is the use of SHA-1 with HMAC, outside of the context of SSL, still acceptable?


The use of SHA1 with HMAC, inside as well as outside of the context of SSL is still acceptable, yes. Even against a nation state attacker. The reason attacks on HMAC(k,m)~=SHA1(k||SHA1(k||m)) are much more difficult than general collision attacks is that as an attacker you do not know the internal state of the hash function when you are trying to create a collision with m, as the secret key is input in the hash function first.

The attack vector for TLS certificates is the (asymmetric) signature which signs a plain SHA1 hash message. I'm still disappointed that after all the experiences they have had with MD5 they haven't yet started to randomize the hash in the signature (sign SHA1(random||message) instead of SHA1(message) ) which would boost the signature security to that of HMAC.
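A quick illustration with Python's standard library. Note the `HMAC(k,m)~=SHA1(k||SHA1(k||m))` form above is a simplification; the real construction pads the key and XORs it with the ipad/opad constants, but the point stands either way: the secret key enters the hash state before any attacker-controlled input.

```python
import hashlib
import hmac

key = b"key"
msg = b"The quick brown fox jumps over the lazy dog"

# The hmac module implements the full RFC 2104 construction:
# SHA1((k ^ opad) || SHA1((k ^ ipad) || m)). An attacker who can't
# recover k never learns the internal state the message is hashed into.
tag = hmac.new(key, msg, hashlib.sha1).hexdigest()
print(tag)
```

Forging a tag here requires more than finding two colliding messages, which is why HMAC-SHA1 survives collision attacks on plain SHA-1.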


It depends how motivated your attacker is. Could someone make a lot of money if they could get SHA-1 collisions with your application?


Seems like the SSL certificates that CloudFlare automatically generates for sites are SHA-1 signed.

Anyone know if they're planning to upgrade to SHA-2?


We are going to do the following:

1. Reissue our SHA-1 based certs to meet the deadlines specified by Chrome so that no customer sees a warning in Chrome.

2. In the future, we will also have an automatic fallback system so that for poor clients (that only support SHA-1) we are able to dynamically provide 'old' certificates. For up to date clients we will not use SHA-1 at all.


They can be seen in the Chrome discussion thread, complaining to Google that the timeline is too aggressive. But they're a good company, and I imagine they'll update as soon as they can.


Ah interesting, thanks.

(For anyone else, here's the link: https://groups.google.com/a/chromium.org/d/msg/blink-dev/2-R...)


This is a great thread. Thanks for posting it.


The issue here is old clients... Does anyone know how old clients would handle SHA-2 certs? Would they just get a warning saying the site is insecure but still be able to visit it over an encrypted connection, or do they break completely? I guess I'll have to run a few tests this afternoon and see how Windows XP performs.


Windows XP will need Service Pack 3 to support SHA2 certs.

Windows Server 2003 will require a couple manual hotfixes to get SHA2 support.

http://blogs.technet.com/b/pki/archive/2010/09/30/sha2-and-w...


Please let us know. Here's a table showing when SHA-2 support was added to various browsers.

http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_br...


I've seen in a lot of places that Chrome only supported SHA-2 since version 26 (2013). I find this hard to believe (Firefox has supported it since 2005) and I can't find a solid reference for it. However, I note that this page from 2008 says Chrome supports it https://www.tbs-certificates.co.uk/FAQ/en/476.html (i.e. from version 1)


"SHA1 and other hash algorithms generate a digital fingerprint that in theory is unique for each different file or text input they sign."

... and there it goes, any credibility I would give the author. There's dumbing down the content for a non-technical audience, and there's not understanding.


what's wrong with that statement exactly?


Two things:

1. Hashes don't "sign" things (not directly anyway)

2. Hashes aren't unique in theory or practice (using a 256-bit hash on every 257-bit number will generate 2^256 collisions by the pigeonhole principle).
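To make the pigeonhole point concrete, here's a toy birthday search against SHA-1 truncated to 24 bits. Collisions that are astronomically expensive at the full 160 bits show up after a few thousand tries at 24 (the function name is mine, purely for illustration):

```python
import hashlib
from itertools import count

def find_truncated_collision(bits: int = 24):
    # Birthday search: with a b-bit digest we expect a collision
    # after roughly 2^(b/2) attempts -- here ~2^12, a few thousand.
    seen = {}
    for i in count():
        m = b"msg-%d" % i
        h = hashlib.sha1(m).digest()[: bits // 8]
        if h in seen:
            return seen[h], m, h
        seen[h] = m

m1, m2, prefix = find_truncated_collision()
print(m1, m2, prefix.hex())
```

Same principle at 160 bits, just with the attempt count pushed out to ~2^80 for a generic attacker.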


High quality hashes are unique in practice.

Suppose every person generates 1 billion files a second * 7 billion people * 1,000 years = ~3×10^28, call it 10^29. For a collision among non-identical files using a good 256-bit hash you get ~1/(2^256) * (10^29) * (10^29) = ~1/(2^198).

Or 1 chance in ~4×10^59 of finding even one collision.


Your math is off a bit[0][1] but you're right, it's a vanishingly small probability of a single collision. This is fairly academic though, when you're talking about an adversary exploiting weaknesses in the algorithm itself, and not a perfect PRF.

[0] http://preshing.com/20110504/hash-collision-probabilities/

[1] http://www.wolframalpha.com/input/?i=%281+billion+*+7+billio...
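Redoing the arithmetic with the exponents in a single base (using the standard birthday bound P ≈ n²/2^(b+1), with the parent's n = 10^29 files and a 256-bit hash):

```python
import math

n = 1e29     # ~files hashed: 1e9/sec * 7e9 people * 1,000 years, rounded up
bits = 256

# Birthday bound: P(at least one collision) ~ n^2 / 2^(bits + 1).
log2_p = 2 * math.log2(n) - (bits + 1)
print(log2_p)   # about -64.3: roughly one chance in 2^64, not 2^198
```

Still vanishingly small, just not as small as mixing base-10 and base-2 exponents makes it look.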


Oops, when counting exponents make sure they're in the same base.


maybe the use of "sign"?


I could have said "practically unique", instead of "in theory is unique" there, but I make the distinction more clear just below that and note that there are always collisions out there.


What stops Google Chrome from simply disallowing new SHA-1 hashes that collide with a known list of SHA-1 hashes for existing certificates?

That would allow non-colliding SHA-1 certificates to function as usual and prevent millions of people from major headaches related to speedy certificate migration.


Or cut to the chase and just deploy DANE, so that we don't need CAs to sign anything?

http://en.wikipedia.org/wiki/DNS-based_Authentication_of_Nam...


I'm not saying the conclusion is wrong, but the reasoning likely is: there's a huge difference between a collision attack and a so-called second pre-image attack [1]. To impersonate a website protected with an SHA-1 certificate you'd have to mount the second kind.

> Walker's estimate suggested then that a SHA-1 collision would cost $2M in 2012, $700K in 2015, $173K in 2018, and $43K in 2021.

If you adjust those cost estimates for the fact that a second pre-image is needed they look more something like this:

An SHA-1 second pre-image attack (needed to e.g. impersonate an SSL protected website) would likely cost about 10^26 USD in 2021... By comparison world GDP is only about 10^14 USD.

Better safe than sorry though. :)

1. https://www.ietf.org/mail-archive/web/pkix/current/msg30395....
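For the generic (brute-force) case the gap is easy to state. This sketch deliberately ignores the structural shortcuts in the Stevens paper cited above; it only shows why a collision estimate says nothing about impersonation cost:

```python
# Generic brute-force work factors for an n-bit hash (SHA-1: n = 160).
n = 160
collision_work = 2 ** (n // 2)   # birthday attack: ~2^80 hash calls
preimage_work = 2 ** n           # (second) pre-image: ~2^160 hash calls

# The gap between the two attack classes is itself a factor of 2^80,
# which is where the "10^26 USD vs. $43K" difference comes from.
print(preimage_work // collision_work)
```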


Can someone post a summary of the part of the story hinted to by the headline? I couldn't find it.


It's one of the links early in the story. (It's pretty link heavy, so it can be easy to miss. 6th link in.)

Basically, by 2015, Google will update Chrome to show websites with HTTPS certificates using SHA-1 as "insecure." The level of insecurity shown will get more severe over time, as well as being based on when your certificate expires. Here is the full link for details:

http://googleonlinesecurity.blogspot.com/2014/09/gradually-s...


That's What. The headline hints at Why, but never delivers.


This entire article is about Why. Read the "An attack on SHA-1 feels plenty viable to me" section for the most info.


It's a roundup of last week's news, and (hopefully) better all-around explanation and background for people with less technical knowledge. It also points people to another tool I made, https://shaaaaaaaaaaaaa.com, to actually do the SHA-1 check.


I see -- it's not announcing any news, or any new theories; it's just a roundup of last week's news.


Yes. Much like last week's news was. :-)


When I use the sha tool against google.com, it shows them using SHA-1.


Google has stated they won't complete their transition until 2015.


SHA1 is the only supported hash algorithm for PGP key fingerprints.


Ironically, www.google.com itself is using SHA-1 :)


Ironically, www.google.com itself is using SHA-1

Ref: https://shaaaaaaaaaaaaa.com/check/www.google.com


Another PR stunt, Google? No, that won't work.



