From the two quick reads I've given it, not a fan.
On the plus side: building this stuff directly into the browser image may prevent people from deploying Javascript Cryptography, which is unsafe in every environment. Systems built on the Web Cryptography bindings are at least anchored to trusted implementations of algorithms already resident in the browser.
On the minus side:
* These interfaces may have the opposite effect. Mass-market websites must support all popular browser versions. For the foreseeable future, we won't have a market in which the majority of browsers implement something like this. Unfortunately, the web crypto API is something that is very easy to duplicate "cosmetically" in Javascript. So the net effect of this in the medium term is probably negative. That said, I also understand that we've got to rip off the band-aid sometime.
* The interface is low-level. I appreciate that an API like this naturally wants to be low-level, to maximize the number of things you can build on top of it. But most of the things you could build on these particular building blocks will have subtle vulnerabilities. Not only that, but the interface doesn't appear to provide even the "envelope" abstraction that other low-level libraries present.
* So much of this design is low-level that too much of the system's security is left to content-controlled Javascript, so many (perhaps most) of the systems that rely on this library are going to have exactly the same problems they have now.
* The low-level building blocks they provide are incoherent. If you wanted to just expose all the little knobs and wires that it could take to implement pre-existing (and broken) crypto standards, then an interface which pretends that AES-CBC and AES-GCM are two different ways to do the same thing makes some sense; some systems need GCM, and some need CBC, and you don't have to give much thought to the difference. But this library doesn't even do that; for instance, it doesn't look like callers control the nonce in AES-CTR. A good way to sum this up: if this API succeeds in the market, there are going to be deployed, widely used systems that use RSA for bulk encryption.
If I were elected Archduke of Browser Crypto, the first standard I would propose would be a simple authenticated AES-encrypted envelope format; something like PBKDF2-keyed, HMAC-SHA256-authenticated AES-CBC messages. Something impossible to screw up, because Javascript authors aren't asked to make design decisions.
I believe you are suggesting that we make a more foolproof API like keyczar. The original idea of DOMCrypt was along those lines as well. I don't think we will be able to avoid specifying and shipping the low-level API at this point. It would be nice to define such a foolproof API on top of the low-level API, prototype it with a JS implementation, get feedback on how useful it is (i.e. whether it is too limited for the use cases people have), and then push browser makers to provide this safer API natively. I think the main concern with a keyczar-like API is that we won't be able to create one that significant numbers of websites would actually use.
Can I ask what the point of enabling websites to create vulnerable cryptosystems is? I don't mean that snarkily and I'm not trying to make a point. What are the applications that would not be possible with a high-level "envelope" interface that are key use cases for the Web Crypto API?
> Can I ask what the point of enabling websites to create vulnerable cryptosystems is?
You yourself noted that Javascript, not this API, is the enabler of vulnerable cryptosystems in web apps.
Also, sometimes you want to implement a vulnerable cryptosystem. For example, imagine a PDF viewer or a Microsoft Word document viewer that wants to be able to view password-protected (encrypted) documents with acceptable performance.
> What are the applications that would not be possible with a high-level "envelope" interface that are key use cases for the Web Crypto API?
I have asked this question quite a few times. The responses are that the model you and I have in mind doesn't fit the model that certain applications want/need. Also, the people working on this are people who are used to programming in C with CAPI, PKCS#11, etc. So, it shouldn't be surprising that the API will end up looking a lot like a Javascript transliteration of CAPI, PKCS#11, etc. Finally, I think some people perceive it as being easier to standardize "Javascript PKCS#11" than it would be to standardize several high-level "foolproof" APIs.
Please also see my other response(s) in this thread.
If you want to implement a vulnerable cryptosystem, do it in pure Javascript. Concessions to the broken cryptography of the last 20 years have no place in a forward-looking browser security standard.
Similarly: the industry has grown accustomed to interfaces like PKCS#11, and indeed from looking at the threads on the W3C Web Crypto mailing list, it's clear that PKCS#11 was a major influence on this proposal. But the industry has manifestly failed to enable the deployment of sound cryptosystems for 20+ years. Cryptographers have come up with new interfaces, like Keyczar and NaCl. Why aren't forward-looking standards adopting those instead of perpetuating mistakes from the '90s?
Ordinarily, you wouldn't want the perfect to be the enemy of the good. But this is security and you don't get to say that here.
As a developer and someone who has spent the last 8+ years dealing with the security challenges of other people's web software, I think the goal of interop with existing crypto is counterfeit. It would be extraordinarily difficult for a browser standard to capture the gory details of every widely deployed cryptosystem. You don't have a shot at "very good" interop.
Meanwhile, if we could just solve the problem of getting applications a trusted cryptographically secure binding between client and client-data-stored-on-server, there's a lot of interop problems we could solve "out of band". Part of what makes interop hard is getting the details of CTR counter byte order right, sure. But another part of what makes interop hard today is just providing the baseline level of security required to make the trust model work.
More people need the trust model to work than need support for backwards crypto systems. That's the problem that should get tackled first.
> I think the main concern with a keyczar-like API is that we won't be able to create one that significant numbers of websites would actually use.
I'm personally much more concerned about many websites using an API with significant faults than about fewer websites using a solid API. The fact that an MITM or XSS can completely undermine this makes it equivalent (outside of the secure RNG -- a definite good thing) to doing all your crypto in JS. As long as the server controls how the building blocks are put together, what keys are used and when, etc., the trust model is broken.
> The fact that an MITM or XSS can completely undermine this
MITM and XSS are problems with many Web APIs. There are already countermeasures for MITM (e.g. TLS) and XSS (e.g. CSP). They are too difficult to use, and we need to make them easier to use. That is a parallel effort.
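For readers unfamiliar with the CSP countermeasure mentioned above, a minimal illustrative policy looks like the following; the exact directives a site needs will vary, this is just one plausible shape:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'
```

With `script-src 'self'`, inline scripts and scripts from other origins are refused, which removes the most common XSS injection vectors -- though, as noted, deploying this correctly on a real site is its own difficult project.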
> outside of the secure RNG -- a definite good thing
The cryptographic PRNG is actually the enabler of pure JS cryptography. If all browsers provide only the cryptographic PRNG, then we'll see people implementing crypto algorithms in JS for sure. I would rather we provide them with (native code) APIs that guarantee maximal protection against side-channel attacks.
> As long as the server controls how the building blocks are put together, what keys are used and when, etc, the trust model is broken.
Even if these crypto APIs are less useful for normal web page use, I expect them to be an improvement for (packaged) installable web apps. We are planning on packaged web apps being protected by digital signatures (at least in Firefox OS) and having a default CSP that prevents the execution of any external scripts.
Of course, it would still be up to the application developer to do all the other things that are necessary for their application to be secure -- whether it is using cryptography or not.
But you haven't protected those applications from Javascript attacks, because you've left virtually all of the glue to bind HMAC to AES (if the developer knows to do that) or to set the parameters for AES-CBC or AES-CTR (if the developer knows to do that) and to perform secure key exchange and avoid key reuse (if the developer knows to do that) to higher levels, like Javascript.
Avoiding attacks on Javascript crypto is a very strong argument for a browser crypto standard --- but that standard has to be a high-level envelope interface for the argument to hold water.
> I think the main concern with a keyczar-like API is that we won't be able to create one that significant numbers of websites would actually use.
Why not try to create one that a significant number of users will use? Stuff like automatic verification of files using a hash, or signing and verifying GPG signatures on web-based message boards, would be tremendously useful and doesn't require much support on the website side.
I recommend that people write up proposals for such APIs and submit them to the W3C working group. It should be simple to specify them, because you should literally be able to specify them directly in code based on the low-level API.
We've discussed much of this before on HN, and I am slowly but surely starting to see exactly what you mean. Getting everything in crypto just right is a hard problem, especially with the disparity between different crypto providers and how it should be implemented to be secure.
The biggest issue I have is that unless low-level primitives are available, it is hard to work together with various other libraries/standards. Getting Java- and OpenSSL-based crypto to play nice is already harder than it should be; I can't imagine what a mess it would be to try to make it work with yet another standard. And at the same time, if those low-level primitives are available, people are going to use them instead of the higher-level functions because they have certain specific requirements.
---
Any specific reason as to why AES-CBC? If seeking is required wouldn't AES-CTR be a better choice? Or are you making assumptions that the payload size is small enough that decryption of the whole object is feasible?
Seeking is a great example of something that you would think you'd want in a crypto API that has pitfalls when you implement it in a real cryptosystem. And it looks to me like this proposal wants to support seeking, and doesn't account for those pitfalls.
And I think you're right: I think the interface proposed here is idiosyncratic enough that it will be difficult to make it interoperable, but that idiosyncrasy doesn't buy any additional security.
> Any specific reason as to why AES-CBC? If seeking is required wouldn't AES-CTR be a better choice? Or are you making assumptions that the payload size is small enough that decryption of the whole object is feasible?
Choosing nonces is hard. For a given key, nonce reuse with AES-CBC fails by leaking plaintext equality of the first blocks of the two messages. This is bad. AES-CTR usually fails by leaking the entire two plaintexts. This is catastrophic.
Cobbling together AEAD schemes out of AES-unauthenticated-anything and HMAC-anything is a poor idea, and unnecessary. AES-GCM and AES-CCM are here, standard and performant. Use them.
Totally agreed on every level. If you want to add security, you do it from the high level; exposing low-level building blocks is a recipe for disaster.
While I think this could be a good thing in theory, exposing such a low level crypto interface is likely to end in pain. How many people can combine the building blocks into something secure? And MITM/XSS still give up the keys to the castle.
I'd much rather see a simple high-level interface to crypto in the browser.
The plan is to first nail down this low-level API, and then specify a high-level interface wrapping the low-level one that makes it much easier for most developers to get things right.
I see talk in those threads about how the "low-level API" is logistically the right move, but not a lot of talk about how it's right in an engineering sense. Is it possible that the cart is pulling the horse here?
I don't think that's enough -- in fact, I think it's worse. The low level API inherently has a broken trust model, and using that as a stopgap is going to be worse for security than it not existing at all. If anything, the low level API should be tabled in favor of the high level.
This looks like it could be pretty useful. The main argument against crypto in the browser is that if you send crypto js over a non encrypted channel it's vulnerable to MITM, and if you're going over HTTPS you might as well just send it plaintext. If the browsers themselves implemented secure crypto it could be a boon to certain types of applications, for example S/MIME in webmail.
I don't see this adequately defending against MITMs or XSS. You can still hook all the crypto primitives, inject new keys, or pull out existing keys from the same origin.
This scares me; a lot of people are going to assume this is secure against attacks that it simply isn't there to defend against.
It could be useful if it was nailed to HTTPS. But it wouldn't be safe; it leaves too much to content-controlled Javascript.
When you've got HTTPS working, the threat model left to solve is, "can I trust the server not to fuck me with a subtly broken cryptosystem". This API doesn't address that problem; it admits to more broken cryptosystems than sound ones.
> That doesn't mean providing the primitives to implement bad crypto is responsible for the bad crypto.
In this case, it absolutely does. Not only will people most likely put them together incorrectly (which isn't the API's fault), but it doesn't provide any defense against cases where the browser is running code not intended by the server, e.g. MITM and XSS. As soon as the browser is running bad code, this is 99.9% equivalent to existing JS crypto (the exception being the secure RNG). The trust model here just doesn't hold up.
While the trust issue you raise is legitimate, it's not one we're trying to solve (or at least, not yet, and I hope not soon).
You may ask then, "Well, what's the point of this API if you're not going to solve it?"
The answer is that solutions for the trust model issues are being addressed concurrently and, unquestionably, more adequately, in other W3C working groups, the WHATWG, the IETF, and other such standards bodies. For example, the development of Content Security Policy, the hopeful formation of the Sys Apps WG, offline apps that execute in their own origin, the proposed <browser> tag, the formalization of things as basic as the web origin concept (RFC 6454), Web Intents as a service-agnostic IPC mechanism. There are a number of efforts going on to address and enhance the overall security and utility of the platform. Our WG is providing just one small part of the overall picture.
Every concern about malleability of the run-time applies equally when you're not doing JS crypto - native or browser-mediated. Storing something in localStorage/IndexedDB? Well, now you've got an opportunity to turn transient XSS into persistent, stored XSS. Does that mean localStorage/IndexedDB are doomed to failure or fundamentally flawed for not addressing that? No. Are cookies complete and utter garbage due to all of their known issues? No (or at least, not /complete/ ;-)
I recognize that there are a variety of security decisions and trade-offs being presented in this API, decisions that will have to be made by site authors. It may be that this is completely unacceptable - and I would hope, by now, that people feeling that way would be sending mail to public-webcrypto-comments@w3.org saying that. However, for the use cases that have been presented and expressed, the trade-offs have been understood and are, thus far, acceptable, which is why this WG has continued on the path it's on and why the participants have, so far, believed in the utility of the API.
As far as algorithms go, our failure (thus far) to include DSA is hardly going to be the end of SSH (for which many still use DSA keys), just like our failure (thus far) to include ElGamal or MD5 is not going to mean the death of PGP/GPG. However, not including them also means that there's no way to implement such applications, even if all other concerns were controlled for - and that would suggest our API is incomplete and a less compelling alternative. Suggesting that failing to implement PKCS#1 v1.5 will finally mean the end of it is unrealistic.
So far, our stated goal has been to produce a low-level API that has applicability for a number of situations, as captured both in the use cases of the draft and in the companion use cases wiki. The core functionality starts with "Secure key store. Secure RNG". In order to support "secure key store," it's necessary to define what you can do with these "secure" keys, hence, the specification of certain operations and algorithms.
We're not (thus far) attempting to create an enveloping format - that's something that the IETF JOSE WG is doing, and that various groups and standards ranging from XML DSig to CMS to S/MIME have done or tried to do.
We're not (thus far) attempting to create the one-true-perfectly-safe-full-of-cryptographic-kittens-and-puppies "box" and "unbox" - it's interesting, unquestionably useful, but getting two crypto-geeks, let alone a hundred, to agree on what that operation is composed of is a sisyphean effort. Should it be Salsa20 or AES-GCM? Why not Blowfish? Monkey knife-fighting appears more civilized than some of those crypto-political discussions - and makes more progress to boot.
By providing the low-level API, application developers can make an informed decision on what that "box" looks like to them, or rely on cryptographically skilled developers to produce libraries (a la KeyCzar, a la NaCl) to do that for them. Yes, it also means that uninformed developers can start smashing things together, leading to wonderfully painful crypto-explosions. However, you don't see WebGL being lamented for its lack of built-in VRML support (I hope...), and neither do I think this API should include the kitchen sink, bathtub, robe, and matching slippers in order to be useful and applicable for many web developers.
Yes, screwdrivers are wonderfully useful tools, and it's fairly hard to hurt yourself with one. But sometimes you really need a hammer - or a chainsaw, or a level, or a drill - in order to do the right job. This API is merely a toolbox - dangers and all - not the One True Solution to all the Bad Crypto.
Here are two things you say in the same paragraph:
> By providing the low-level API, application developers can make an informed decision on what that "box" looks like to them, or rely on cryptographically skilled developers to produce libraries (a la KeyCzar, a la NaCl) to do that for them.
and
> ... neither do I think this API should include the kitchen sink, bathtub, robe, and matching slippers in order to be useful and applicable for many web developers.
In other words, you're claiming that some few skilled cryptographers will produce a safe, high-level API (eventually) and the low-level API is already useful to many web developers. The implication is that many web developers can do something that is both useful and safe with only the low-level API.
My experience has shown that to be the exception much more than the rule. Put another way -- I've never reviewed a system implemented by developers experienced with cryptography that didn't have at least small flaws. The carnage created by validating, in the eyes of many developers, the practice of novel crypto protocol design could be astounding.
I think you're too sanguine about the potential damage to users by this API. That's the problem with externalities. It's not the developers who hurt themselves with this "chainsaw", it's the end-users who are hurt by those developers.
> In other words, you're claiming that some few skilled cryptographers will produce a safe, high-level API (eventually) and the low-level API is already useful to many web developers. The implication is that many web developers can do something that is both useful and safe with only the low-level API.
I was not trying to suggest that the low-level API alone is sufficient. I think there's no question that, for some segment of potential use cases, developers would be unquestionably better served if their hand were held all the way through, and the crypto protocol and exchange were fully dictated for them. I think that's true for just about any use of crypto.
However, if we only provide a so-called high-level API (what I've been terming box/unbox, and which is conceptually similar to what Keyczar, NaCl, or even JOSE offer), then I think there's an even larger number of known and possible use cases that simply cannot be implemented. That's why I believe it's more useful to provide a low-level API.
I think perhaps there are unrealistic expectations on what this is trying to do or who this is trying to serve. It's not trying to save the world from bad crypto. It's attempting to bring to the web platform what has existed in native code for 3+ decades - pitfalls and all.
I'd also be concerned about the argument going the other way - if we only provided a high-level protocol & API, then developers would need to implement whatever protocol we specified atop the existing low-level primitives (OpenSSL, PKCS#11, CNG, etc.). There, just as well, the risk is that people will get it wrong - or that 'others' will write libraries to do it right. You can't escape from the fact that crypto is hard, and that there is always going to be a segment of developers who will simply not get it right.
You guys are working on something important, for better or worse.
Have you ever arranged to sit down with people who have spent time breaking cryptosystems? You're talking to a couple here (Nate is far more qualified than I am), and I saw on the mailing list you bounced the ball back and forth with Zooko --- who, while more a builder than a breaker, is at least pretty cognizant of real-world attacks.
You also had the RUB critique of the API, which saw no response on the mailing list.
Is this API something that is likely to ACTUALLY HAPPEN? If it is, why can't we just arrange to get people into a room to talk to them about what goes wrong when devs get tools like this?
The most dispiriting thing I see happening here is parties talking past each other. Let's force the issue.
As far as "arrange to get people in a room" - that's exactly what we're trying to do, and exactly why we solicit feedback. While the Hacker News engagement is great, and so is Twitter, I suspect that I'm missing 90% of the discussion because nobody is sending mail to where we asked - public-webcrypto-comments@w3.org - and so no one in the WG is having a chance to engage and discuss.
Again, we're not trying to design a cryptosystem here, as far as a "secure messaging protocol" goes, not least because during the months it spent as a W3C Community Group and through chartering into an actual WG, no one who submitted feedback actually gave a use case where their needs would be met with such an API.
Definitely, the best place for concern and criticism will be the mailing list, and that's the whole point of the wide canvas and call for feedback.
My main complaint was your implication that large numbers of JS developers ("many") would benefit from this low-level crypto API in the browser. I think it will have the opposite effect, leading to a million differently-broken encrypted message protocols, encouraged by the browser offering just enough rope to shoot the users in the foot.
You don't really address this issue, but point to what you claim is the greater good: more freedom to implement any protocol the dev pleases. In your opinion, the gain from this is worth the cost of a few million compromised users (story out yesterday: Pandora doesn't manage keys for encrypting cookies properly due to a developer choice).
The only problem with this argument is that there is no limit on browser JS developers' freedom today. They can implement anything they want, plugging in AES ECB blocks wherever they want.
For example, consider Meebo, which sent a JS RSA implementation plus the key itself as a substitute for SSL. There was nothing wrong with their JS implementation of raw RSA (as far as I remember), but the whole problem was that crypto doesn't work if the key and cipher implementation are both modifiable by an attacker.
I find it funny that you agree with me in the end:
> You can't escape from the fact that crypto is hard, and that there is always going to be a segment of developers who will simply not get it right.
The goal is not to prevent every failure, but to reduce the number of devs who will fail. The way to do this is to offer high-level APIs that reduce the number of such victims.
Currently, the W3 API will victimize millions of users via any clever developer who has read Applied Cryptography. With a high-level API, you reduce that set of devs to those who are determined to get it wrong (e.g., post private keys on their website today). I think you agree that's a smaller set (though how much, I don't know).
The W3 low-level API will encourage more flaws while offering no improvement over the current JS security model. The fact that you can do things like Meebo (but with higher performance and cred with managers because "it's the standard") is a net loss of security, not a gain. Too bad.
I'd like to see a declarative way to sign or encrypt parts of a document or form submission to protect them from tampering by the origin server. But js is under the control of the origin server, so I don't see this as a security advance over TLS, just a crutch for broken apps with no non-js mode using HTTP correctly.
Minimizing the number of trusted components that are under the control of Javascript is part of the point of the interface, but as Cody and I have pointed out (and obviously you as well), there's enough left to JS that the interface doesn't really solve that problem.
One thing I would probably do as Archduke, after ordering a 70% tax on cupcakes, is have the bindings only work on HTTPS connections.
> One thing I would probably do as Archduke, after ordering a 70% tax on cupcakes, is have the bindings only work on HTTPS connections.
I support the cupcake tax (we can discuss numbers later), but I think making it work only for HTTPS connections makes that seem like the biggest issue. I believe that XSS attacks, simply due to the ease of discovery and exploitation, are a much bigger issue here. Hijacking connections to a site (whether via MITM, DNS redirection, whatever) isn't easy, but XSS is downright trivial in most cases; it's definitely the biggest concern in my mind.
For what it's worth, while I object to your tax-and-spend approach with cupcakes, I do subscribe to your TLS-only newsletter for key-utilizing operations, and had proposed that already. Keyless operations are more of a gray area of policy rather than security. For example, should hashing require TLS? I think not, since there are non-crypto-but-still-useful applications of cryptographic hashes and random numbers (e.g., name-based or PRNG-based UUIDs as described in RFC 4122). However, splitting the interface into TLS and HTTP segments is something that has its own issues.
It's possible that a compromise might be the "Secure Cookie" equivalent of specifying policy (HTTPS-only) when creating or importing keys, but I hope it doesn't come to that.
That said, I'm also a big proponent of requiring some sort of sane CSP settings to also narrow the scope of the API usage ( http://www.w3.org/2012/webcrypto/track/issues/21 ), but that remains an open issue.
I understand that now, after talking to David Dahl on twitter. That's extremely dispiriting. I wish you guys would think twice before caving on this issue, because while Netflix can benefit marginally from something like this, the Internet will be harmed to a much greater extent. There are a lot of site operators out there who are enthralled by mythologies about how expensive TLS needs to be and how easy it is to replace TLS.
For what it's worth, at Dahl's suggestion (and because I was off work sick yesterday), I read every thread on the mailing list archive and saw the design issues that were batted around on this. I now think the overall goal of this effort is unfortunate and the effort is star-crossed.
I have more respect for Eric Rescorla than anyone else I can think of working in standards, but interoperability with existing crypto is a terrible, terrible goal for a web crypto API. I'm not saying "no good can come of it", just that I can't think of any good that is likely to come of it and I can think of a lot of bad stuff.
> One thing I would probably do as Archduke, after ordering a 70% tax on cupcakes, is have the bindings only work on HTTPS connections.
I am very interested in this idea, not only regarding this API, but also regarding camera/microphone APIs, where a MITM could cause all kinds of severe privacy issues. I know the idea has been discussed specifically in the context of WebRTC (can an active attacker inject JS into a page to turn on your camera & microphone and send the feeds back to them?), but it seems like it may have already been shot down there.
I would very much rather see higher-level crypto, especially crypto that could be used declaratively, without JavaScript. (Something similar to Enigmail, but for browsers.)
Heck, there isn't even a standard way to verify a hash on downloaded files.
For the record, this is only the First Public Working Draft. A lot can and will change. We are looking for commentary to understand whether we are on the right track.