An attack on SHA1 that makes certificate forgery viable within the next few years doesn't seem very likely, although over the long term it might be. The attack on SHA1 isn't like the attacks on RSA-1024; my sense is that the literature already knows how to break RSA-1024 given enough compute, but does not know how to do that with SHA1. Further, factoring RSA-1024 provides an attacker with a total break of RSA-1024 TLS, but not every attack on SHA1 will necessarily do the same.
Second, there's a subtext that SHA-3 having been standardized somehow puts the writing on the wall (albeit, a far-away wall) for SHA-2. Not so; SHA-2 could remain secure (in its TLS certificate use case) indefinitely.
For a viable rogue CA attack, you need a chosen-prefix attack. Current best research (https://marc-stevens.nl/research/papers/EC13-S.pdf) shows it should take 2^77.1 SHA-1 compression function calls to do a chosen-prefix attack. Say this is improved to 2^65 within the next 10 years. Right now a good GPU (AMD R9 290) can do 3 billion SHA-1 compression calls per second. Say Moore's Law continues for the next 10 years and that 10 years from now a GPU can do 20 billion SHA-1 compressions per second. So 10 years from now, 100 high-end GPUs should be able to produce a rogue CA with a colliding SHA-1 signature in 7 months of compute time.
Change one little assumption and assume the best attack ends up being 2^60 instead of 2^65. In this case, a viable attack could certainly be carried out in the next 3-4 years.
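A back-of-envelope check of those figures (using the same assumed numbers as above, nothing more):

    SECONDS_PER_MONTH = 30 * 24 * 3600

    # 2^65 compressions across 100 GPUs at a projected 20e9 SHA-1/sec each:
    print(2**65 / (100 * 20e9) / SECONDS_PER_MONTH)   # ~7.1 months

    # if the attack instead improves to 2^60, even today's 3e9/sec GPUs
    # bring it down to weeks:
    print(2**60 / (100 * 3e9) / SECONDS_PER_MONTH)    # ~1.5 months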
You can't cross your fingers and hope such an attack will not be discovered. The time to abandon SHA-1 is now.
Secondly, multiple SHA-1 ASICs exist.
Thirdly, WebGL has made it trivial to gain vast GPU resources. 20,000 viewers for two hours can be bought for $20.
Fourthly, I don't care.
Yes they have. Any integrated circuit that tries to pack as many transistors as possible on a die is, by definition, following Moore's Law. To convince you: http://www.mumblegrumble.com/visual/roadmap/other/nvidia_moo...
Is that pricing from a botnet or a company like crowdprocess.com?
The SHA-3 mention at the very bottom was in the spirit of "all things are broken eventually", not a specific comment on SHA-2 (though my understanding is that there are some conceptual weaknesses that have been identified). I don't think I've confused the issue there, but if I see confusion I'll definitely update it.
I'm also not comparing attacks on SHA1 to brute force (which is also not how MD5 fell).
It would be helpful, when people posit attacks on SHA1, if they'd cite the literature they're referring to.
Very good point, though I would expect SHA2 to see far more research on weakening it. It's been around a lot longer, and its wider deployment makes it a much higher value target. (Is SHA-3 supported anywhere right now?)
Thank you for the reply.
"given enough compute", we can break any crypto just with pure bruteforce, although in practice I believe there's a point at which the amount of power that would be required becomes physically impossible due to the limits of computation within the universe (i.e. Moore's Law will definitely end sometime). To me, that says using extremely large hash sizes can keep things quite secure - even attacks that reduce complexity by many orders of magnitude could be impossible in practice - e.g. a 2048-bit hash for which a 2^500 complexity attack is found won't be any less practically secure.
...unless we somehow discover that P = NP, in which case the world could become a very interesting place...
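For scale on the "limits of computation" point, a rough Landauer-bound estimate (all figures below are my own back-of-envelope assumptions, not from the thread):

    from math import log

    k, T = 1.38e-23, 300                  # Boltzmann constant, room temp (K)
    energy_per_bit = k * T * log(2)       # Landauer limit: joules per bit flip
    attack_energy = 2**500 * energy_per_bit
    sun_year = 3.8e26 * 3.15e7            # total solar output for one year (J)
    print(attack_energy / sun_year)       # ~1e96 solar-years just to count to 2^500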
The most divisive item was how to represent SHA1 deprecation. The OP's article doesn't really touch on it, but the reason that Google and everyone else haven't moved on is that there still exists a sizeable number of clients that can only accept SHA1 (and will error on anything else).
I actually suspect that large sites like Facebook, etc will maintain multiple certs at the different levels and dynamically serve the best one up that the client can support. They're already doing things like only serving HSTS to browsers that identify as Chrome, etc.
1 - https://www.expeditedssl.com/simple-ssl-scanner/scan?target_...
How would you do that? When the TLS connection is established you know nothing about the client except its IP address. All of the interesting information about the browser is transported via the HTTP stream which is tunneled inside the TLS connection.
HSTS is simple by comparison, as it's only an HTTP header.
Anything from ClientHello could be used: http://en.wikipedia.org/wiki/Transport_Layer_Security#Basic_...
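A hypothetical sketch of that idea using Python 3.7+'s ssl module, where absence of SNI stands in as a crude "legacy client" signal (file names and the heuristic are made up; real logic could inspect offered cipher suites instead):

    import ssl

    sha2_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    sha2_ctx.load_cert_chain("sha2-chain.pem", "key.pem")

    sha1_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    sha1_ctx.load_cert_chain("sha1-chain.pem", "key.pem")

    def pick_cert(conn, server_name, base):
        # clients too old for SHA-2 generally don't send SNI either,
        # so route them to the SHA-1 context and everyone else to SHA-2
        conn.context = sha2_ctx if server_name else sha1_ctx

    base_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    base_ctx.load_cert_chain("sha1-chain.pem", "key.pem")  # fallback default
    base_ctx.sni_callback = pick_cert
    # base_ctx.wrap_socket(...) would then serve per-client certificates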
b) I think its SHA-1 scanner is mistaken - it flags my site as using SHA-1, but it's SHA-2 in every cert in its chain: https://www.expeditedssl.com/simple-ssl-scanner/scan?target_...
Everything else is SHA-2 though, as you said.
Certificate verification stops when you encounter a TA. Some libraries do the wrong thing and check that the terminal cert is self-signed, but that's not actually required (nor recommended). You just check that the previous cert (i.e. the intermediate) is signed by the TA, which means verifying the intermediate's signature with the trust anchor's key.
That's why the trust anchor's signature is irrelevant.
See RFC 6024 for a discussion of terminology and concepts.
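A minimal sketch of that rule, assuming RSA certs and the Python `cryptography` package (the three arguments are illustrative x509 certificate objects):

    from cryptography.hazmat.primitives.asymmetric import padding

    def verify_chain(leaf, intermediate, trust_anchor):
        # leaf must be signed by the intermediate
        intermediate.public_key().verify(
            leaf.signature, leaf.tbs_certificate_bytes,
            padding.PKCS1v15(), leaf.signature_hash_algorithm)
        # intermediate must be signed by the trust anchor's key...
        trust_anchor.public_key().verify(
            intermediate.signature, intermediate.tbs_certificate_bytes,
            padding.PKCS1v15(), intermediate.signature_hash_algorithm)
        # ...and that's it: nobody ever checks the trust anchor's own signature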
There's less chance to screw things up if the spec says that root certs should look exactly like all other certs, rather than trimming out parts that shouldn't be needed.
Unfortunately, they've just recommended everyone to use "2048-bit keys" when they announced the HTTPS Google ranking policy. A lot of developers won't understand the difference between a 2048-bit RSA key and a 256-bit ECC key, so they'll just pick RSA, since "Google said 2048-bit keys!". Sooo...maybe this policy will come in 10 years.
Google calls for the use of 2048-bit key certificates, a very reasonable demand. In the forward secure use case, the certificate key is only used for authentication. Using, say, ECDHE_RSA as your key exchange mechanism allows for small but secure elliptic curve keys (EC), forward security (DHE) and uses the certificate's RSA key for initial authentication (RSA).
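As a concrete example, here's roughly what that looks like when configuring a server in Python (paths are hypothetical; the cipher string is one common ECDHE_RSA suite):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")   # RSA cert: authentication only
    # ECDHE-RSA-*: ephemeral elliptic-curve Diffie-Hellman for the session keys
    # (forward secrecy), RSA signature from the cert for initial authentication
    ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")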
Certificates can actually use ECDSA keys, and some companies will support this (Symantec and CloudFlare off the top of my head), but I'm not exactly sure about browser support. The chief advantage, as far as I know and assuming no new breaks in RSA, is a strong reduction in certificate file size (256-bit vs 3k RSA equivalent), not forward security.
If you poke around Google's SSL configuration, you'll see that (!) they use certificates signed with SHA-1. But each certificate expires in 3 months, a short-lived window that reduces the chances that a certificate could be forged, while they migrate to SHA-2 in 2015.
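You can see this for yourself; a quick sketch using Python's stdlib ssl module plus a recent version of the `cryptography` package (an assumption, any cert-inspection tool works just as well):

    import socket, ssl
    from cryptography import x509

    host = "www.google.com"
    with socket.create_connection((host, 443)) as sock:
        with ssl.create_default_context().wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print(cert.signature_hash_algorithm.name)           # e.g. "sha1" vs "sha256"
    print(cert.not_valid_before, cert.not_valid_after)  # note the ~3 month window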
Second, there are old clients out there that still don't support SHA-2. Namely, pre-SP3 Windows XP and pre-2.3 Android.
Edit: originally this comment said that only IE on pre-SP3 Windows XP was affected; apparently Chrome on pre-SP3 is as well, presumably because it uses some system libraries.
A couple of years ago we tried upgrading our certificate to SHA-2, and rolled it back within an hour, because it broke the site for several of our customers.
It might work now; IE6 users have (finally!) dropped to about 0, but we certainly have tons of IE7 users, and I'll have to look up versions of Windows they're using before we try it again.
We work with hospitals whose IT departments need to control changes to their computing environments extremely carefully, and upgrades are unfortunately quite expensive and difficult for these kinds of environments.
Either way, Google has made it pretty clear that they want at least SHA-2 certificates, which, so long as they call it out in address bars, warning interstitials, and make noise about SERP impact, means that this is the way things are going.
Having a certificate with SHA2 will not save you. A client under attack will not even see it. The only thing that helps is to stop accepting SHA1 certificates (and especially SHA1 intermediate CAs) globally. All this stuff about accepting short-lived certificates is only a publicity stunt by Google to raise awareness about the issue (an attacker can forge a certificate with any expiry time she wishes).
This is different than the situation with MD5, where the components needed for a successful attack were known to the literature, and the real work was (a) scaling the attack so that it could perform within the time windows needed to forge a TLS certificate and (b) putting all the pieces together.
(But see upthread with 'pbsd, who is one of those people on HN who knows the subject much better than me).
The problem now is that even if you establish an HTTPS connection, the weak SHA-1 encryption will not protect you.
s/the weak SHA-1 encryption/the weakened SHA-1 hash used to verify the certificate that's used to authenticate the encrypted connection/
I've been looking for a CA who will provide an API to send the cert request, an easy way to prove the domain ownership which doesn't involve SMTP, and the signed cert handed straight back from the API, but haven't found it.
So far the most I've been able to streamline my certificate requests is to automate generating the CSR, skip setting the MX record, just bind SMTP to www.domain.com, get the validation email at 'email@example.com' and auto-forward to my actual email address... so it's mostly automated, but I still have to copy/paste the cert request string into the CA's webform, click the 'Approve' link in the forwarded DV mail, and then copy/paste the final cert from inside email back to the shell where it can finish the import.
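For the CSR-generation step, a minimal sketch with a recent version of the Python `cryptography` package (the domain name is a placeholder):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (x509.CertificateSigningRequestBuilder()
           .subject_name(x509.Name([
               x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.com")]))
           .sign(key, hashes.SHA256()))  # request a SHA-2 signature, not SHA-1
    print(csr.public_bytes(serialization.Encoding.PEM).decode())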
Right now it's just a command line client, but a public API is in the works. And this week we'll be announcing a solution to the cert rotation problem (basically, you'll be able to drive your renewals from cron - it's going to be really cool). You might want to follow @sslmate on Twitter - this is just the beginning of some very exciting stuff for automating SSL cert deployment. Also feel free to email me (address is in my profile).
Sadly, we're still SHA-1 only, because that's all that our certificate authority (RapidSSL) supports at the moment. On the other hand, once we make renewals dead simple, you can just buy 1 year certs and it won't be a big deal upgrading to SHA-2 in a year's time. (After all, even Google is still using SHA-1, but they can easily switch thanks to their 3 month certs and well-oiled cert deployment machinery.)
The /link API is interesting, versus generating a token on your site through the UI. You might want to consider allowing an explicit $$ limit on /buy, since you store the api-key in the clear (albeit in a config file set to 0600).
It looks like you still rely on being able to receive an email on the domain and click an approval link, though. I'm sure this is a RapidSSL requirement, but it makes full automation more complex (certainly not impossible).
Unfortunately it does rely on being able to receive an email, as this is a requirement of virtually all certificate authorities, though the email address in the whois record is also an option (at least for TLDs which list the email in whois). I have some ideas to make this easier for users who don't otherwise receive mail at their domains, such as by letting them point their MX record at sslmate.
A configurable $ limit on /buy is a very good idea; also I should make it possible for users to use sslmate without permanently storing their API credentials on the filesystem.
Did you speak with them about this yet?
I know crypto is not to be taken lightly, and I'm glad people would rather be safe than sorry, and I'll avoid SHA-1 in my own personal security use (`sha256sum` is sha-2 right?). I'm just curious.
No collisions having been published doesn't mean no collisions have been found.
As the article says; “we should assume that the worst vulnerabilities go undisclosed.”
Some government agency MITM'ing major social sites or email providers would be rather possible at that cost.
I'm a developer, but I'm not responsible for SSL cert acquisition.
The ONLY way I can get the people responsible for that to stop using SHA-1 is to tell them that users' browsers are showing a warning/error message on it.
I will eagerly await Chrome doing that.
I don't think this is inherently a problem with the CA model. Rather it's what you'd expect in a competitive market that is basically selling a binary commodity product (a padlock icon), given textbook economics.
The Chrome team are right to push this along, but I do have some sympathy for the CAs here too. I read the whole discussion and it's pretty clear that there was some epic miscommunication going on. Notably, Google thinks that removing the padlock icon is not "deprecation" according to the timetable Microsoft established, but all the people buying certs disagree; that's why they're doing it.
No improvement? How about improvement in security?
> If it has the same SHA1, it means that when we receive the object from the other end, we will not overwrite the object we already have.
> So what happens is that if we ever see a collision, the "earlier" object in any particular repository will always end up overriding. But note that "earlier" is obviously per-repository, in the sense that the git object network generates a DAG that is not fully ordered, so while different repositories will agree about what is "earlier" in the case of direct ancestry, if the object came through separate and not directly related branches, two different repos may obviously have gotten the two objects in different order.
> However, the "earlier will override" is very much what you want from a security standpoint: remember that the git model is that you should primarily trust only your own repository.
> So if you do a "git pull", the new incoming objects are by definition less trustworthy than the objects you already have, and as such it would be wrong to allow a new object to replace an old one.
> in this case, the collision is entirely a non-issue: you'll get a "bad" repository that is different from what the attacker intended, but since you'll never actually use his colliding object, it's literally no different from the attacker just not having found a collision at all, but just using the object you already had (ie it's 100% equivalent to the "trivial" collision of the identical file generating the same SHA1).
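A toy illustration of the "earlier object wins" rule Linus describes above (a drastic simplification, not git's actual code):

    import hashlib

    store = {}

    def put(data):
        digest = hashlib.sha1(data).hexdigest()
        # setdefault: if an object with this hash already exists, keep the
        # original, so an incoming colliding object can never overwrite it
        store.setdefault(digest, data)
        return digest

    h = put(b"original blob")
    put(store[h])                        # receiving the same object is a no-op
    assert store[h] == b"original blob"
    # even if an attacker found `evil` with sha1(evil) == sha1(original),
    # put(evil) would hit the setdefault and the original would be kept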
That being said, git/mercurial and friends will have to transition to another hashing algorithm sooner or later, but it's not as urgent as web certificates security-wise.
A collision in commit hashes would mean you could no longer trust a signed tag, for example.
Look up the old stripe-ctf "gitcoin" challenge and you'll see it really is quite easy to meddle with commit hashes :)
I'm not familiar enough with the inner workings of git to know, but I imagine it would be a pain to juggle updating new commits from people working from local copies of the repo - your malfeasance could be detected quickly if someone computed a diff between the old file and the new file in a local copy (which presumably wouldn't be updated, given that the checksums match).
That said, if people are in the habit of signing their tags/commits using SHA-1, then that would be just as vulnerable as any other signing problem.
Git uses it to ensure that the data that comes out is exactly what went in (like a much better checksum than md5 or crc32 etc).
I don't know what the security implications are on the git side... I suppose an attacker could try to figure out how to change source code in a way that it preserves a commit log.
The Flame attack's math was invented by an internal government cryptographic think tank. And it still had to leverage massive computational power, just not on the order of hundreds of millions.
The idea that a rogue group has access to both of these resources is slightly idiotic. If you had tens of millions of dollars of computers, it would be far easier to attack RSA directly. There are a lot of 1024-bit certs you could pick off for easy profit.
The computational power needed to break SHA-1 is higher than that needed to attack RSA. So if you are financially motivated, attacking RSA has a higher ROI.
Flame, like Stuxnet, is state of the art. Highly funded state of the art. A lot of security researchers look at these things and simply say, "Are you shitting me?!"
Call me optimistic but I highly doubt we'll be uncovering a new stuxnet every single year.
You better back that assertion with something. Even in the worst possible case (for SHA-1), it seems to me that SHA-1 is cheaper to collide than it is to factor a general 1024-bit integer.
@konklone, have you followed the CAB Forum's mailing list on this topic? It's the most I've seen them argue in well over a year.
The attack vector for TLS certificates is the (asymmetric) signature, which signs a plain SHA-1 hash of the message. I'm still disappointed that after all the experience they have had with MD5 they haven't yet started to randomize the hash in the signature (sign SHA1(random||message) instead of SHA1(message)), which would boost the signature's security to something closer to that of an HMAC.
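A minimal sketch of that randomized-hashing idea (my own illustration, not any CA's actual scheme):

    import hashlib, os

    def randomized_digest(message):
        r = os.urandom(20)  # fresh randomness chosen by the signer
        digest = hashlib.sha1(r + message).digest()
        # the CA signs `digest` and publishes `r` alongside the signature;
        # a chosen-prefix collision prepared before `r` is known is useless
        return r, digest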
Anyone know if they're planning to upgrade to SHA-2?
1. Reissue our SHA-1 based certs to meet the deadlines specified by Chrome so that no customer sees a warning in Chrome.
2. In the future, we will also have an automatic fallback system so that for poor clients (that only support SHA-1) we are able to dynamically provide 'old' certificates. For up to date clients we will not use SHA-1 at all.
(For anyone else, here's the link:
Windows Server 2003 will require a couple manual hotfixes to get SHA2 support.
... and there it goes, any credibility I would give the author. There's dumbing down the content for a non-technical audience, and there's not understanding.
1. Hashes don't "sign" things (not directly anyway)
2. Hashes aren't unique in theory or practice (using a 256-bit hash on every 257-bit number will generate 2^256 collisions by the pigeonhole principle).
Suppose every person generates 1 billion files a second: 10^9 files/s * 7 billion people * 1,000 years ≈ 2x10^29 files, call it 10^29. For a collision among non-identical files using a good 256-bit hash, the birthday bound gives roughly (10^29 * 10^29) / 2^256 ≈ 1/(2^63).
Or about 1 chance in 10^19 of finding even one collision.
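Checking that arithmetic (birthday bound: expected colliding pairs ≈ n^2 / 2^256):

    from math import log2

    n = 10**29                  # files generated, per the estimate above
    p = n * n / 2**256          # expected number of colliding pairs (roughly)
    print(f"1 chance in 2^{-log2(p):.0f}")   # -> 1 chance in 2^63 (~10^19)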
That would allow non-colliding SHA-1 certificates to function as usual and spare millions of people the major headaches of a speedy certificate migration.
> Walker's estimate suggested then that a SHA-1 collision would cost $2M in 2012, $700K in 2015, $173K in 2018, and $43K in 2021.
If you adjust those cost estimates for the fact that a second pre-image is needed they look more something like this:
An SHA-1 second pre-image attack (needed to e.g. impersonate an SSL-protected website) would likely cost about 10^26 USD in 2021... By comparison, world GDP is only about 10^14 USD.
Better safe than sorry though. :)
Basically, by 2015, Google will update Chrome to show websites with HTTPS certificates using SHA-1 as "insecure." The severity of the warning will increase over time, and will also depend on when your certificate expires. Here is the full link for details: