The Signal protocol for Node and browsers (berkeley.edu)
215 points by ifelsehow on Dec 15, 2016 | 29 comments



> At the end of the day, window.crypto can be absolutely anything. ... If you must run the signal protocol in-browser, run it in Electron, or as a Chrome app ... At the end of the day, window.crypto can be absolutely anything. If we can bundle all primitives with the rest of the application code, we can verify the integrity of that one JS bundle

If the attacker is running code within the same javascript context, within the browser's process, or within the user's operating system kernel... then you're hosed. Anything can be anything. Other javascript within the same context could redefine global functions, intercept objects passed through them, and mutate function references in your JS bundle. Or it could just log the DOM! A browser plugin or a kernel rootkit can keylog the user. The only defense an application has against the user's own machine being compromised is obfuscation, and that's a losing battle.

There are real issues with doing cryptography with users' keys in web pages, but it's not "their machine might be compromised" (and Electron doesn't solve that anyway). Even if the page javascript correctly stores user keys in localStorage where the server can't see them, nothing stops the server from serving you some backdoored javascript tomorrow which silently uploads your localStorage to the server. This might be correctly solvable with ServiceWorkers, though you'd want users to have some way to verify that they have the correct and peer-reviewed ServiceWorker source running. The easiest way to do that would be some kind of local application or browser plugin, but once you've involved a local application you've missed some of the original goal of keeping it all in a browser, and it would probably be easier for everyone involved if the crypto just happened in the local application to begin with.

Interestingly, some `window.crypto` functions actually solve some of the problems with running cryptography in web pages. You can create and use a crypto key that is handled by the browser and never has its key material exposed to javascript. Even if an attacker injects javascript into the page or the server serves malicious javascript the next day, there's no way to steal the key material. `window.crypto` can effectively provide a virtual HSM from the web page's perspective.
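For example (just a sketch; AES-GCM picked arbitrarily, error handling omitted), a key the page can use but never read:

    // extractable = false: the key material stays inside the browser
    crypto.subtle.generateKey(
      { name: "AES-GCM", length: 256 },
      false,
      ["encrypt", "decrypt"]
    ).then(function (key) {
      var iv = crypto.getRandomValues(new Uint8Array(12));
      var data = new TextEncoder().encode("hello");
      // page code only ever holds an opaque CryptoKey reference
      return crypto.subtle.encrypt({ name: "AES-GCM", iv: iv }, key, data)
        .then(function (ciphertext) {
          return crypto.subtle.decrypt({ name: "AES-GCM", iv: iv }, key, ciphertext);
        });
    });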


I've recently done some research in this direction and was kind of startled that there appears to be practically no interest in how to secure (for example "cryptomessenger"ish) SPAs. There are some things that can be done to compartmentalize things a bit (eg. web workers), in the hope that it might mitigate the next day's browser bugs, but at the end of the day everyone seems to put a huge amount of trust into a runtime environment of startling and ever-increasing complexity, and thus a security track record that's mediocre across all vendors at best.

Further adding to it are untethered, unchecked extensions in eg. Firefox. Every extension in Firefox is essentially running with full "browser root" privileges. Chrome at least has a privilege system that tries to avoid allowing every extension to attach a debugger to anything.

(I was also surprised that "how do i run that untrusted JS in my JS context 'safely'?" is a question asked and answered many times)


Two notes:

1) There is interest and progress on securing web applications:

- For early research papers, look into 'Privilege Separation in HTML5 Applications' by Devdatta Akhawe et al.: <https://www.usenix.org/system/files/conference/usenixsecurit....

- For more practical concerns, see the stuff coming out of the W3C WebAppSec Working Group (CSP, Suborigins, etc.).

- For sandboxing/compartmentalization of code, see the Realms proposal coming to a future version of ECMAScript (JavaScript): <https://github.com/caridy/proposal-realms>

2) Firefox new-style extensions (WebExtensions) are in fact least-privilege.


HSM? What do you mean? What does it stand for? Heisenberg state machine?

How can the window.crypto API create and use a crypto key that's handled by the browser?


>HSM

Hardware Security Module. It's a piece of hardware that goes in a server and manages encryption keys for the server without ever exposing the keys to the server. Usually they're made so that it's infeasible to extract the keys even with access to the hardware: they're made hard to open, covered in epoxy, they erase their keys if the case is breached, etc.

>How can the window.crypto API create and use a crypto key that's handled by the browser?

There's a function for creating a key and you get an opaque object back out. You can pass the key to encrypt and decrypt functions. You can save the key object into IndexedDb, but you're still just dealing with an opaque object reference. There is an export key function, but it only works if the key was created with the "extractable" flag turned on.


The function is `window.crypto.subtle.generateKey`. JavaScript can't access the generated key unless you export it using `window.crypto.subtle.exportKey`.
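Roughly (a sketch; the exact error name may differ between browsers):

    crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"])
      .then(function (key) {
        // rejects because the key was created with extractable = false
        return crypto.subtle.exportKey("raw", key);
      })
      .catch(function (err) {
        console.log(err.name); // e.g. "InvalidAccessError"
      });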

If anyone is interested in the WebCrypto API I recommend this talk by Charles Engelke: https://www.youtube.com/watch?v=D2kEViWKUh0


Hardware Security Module.

> How can the window.crypto API create and use a crypto key that's handled by the browser?

By setting extractable to false when creating the key; see https://www.w3.org/TR/WebCryptoAPI/#dfn-CryptoKey-extractabl...


Olm, the ratcheting algorithm used by matrix.org (https://matrix.org/git/olm/about/), is also worth mentioning.


I love that hack:

    module.exports = Module;

    // Do not recurse into module and waste all day
    Module.inspect = function() { return '[Module]' }

from https://github.com/fabiosantoscode/require-emscripten/blob/m...


> Where TLS can provide an encrypted channel for convincing the user they are talking to the right server, Signal can convince the server it is talking to the right user - and that it is the same user that we spoke to last time.

Same thing can be done with TLS. It's called session resumption.


In a weak sense. In Signal, every single volley (in the Diffie-Hellman ratchet, at least) is testing the validity of the parties and cycling the ephemerality. Thus the newer notion of "key continuity", that is anchored all the way back to the initial, authenticating handshake. TLS has only the lesser resumptive property that the server got back what it sent some time ago, so this is probably the same thing -- but one can imagine lots of opportunity to monkey.


The ratchet is just refreshing the ephemerality. But do we need to refresh the keys for every message? Personally I don't think so.


One necessarily implies the other.


> At the end of the day, window.crypto can be absolutely anything.

Would it be possible to toString() the function and check if it is a native function before execution? Seems like a simple solution but perhaps I'm missing something? I guess in theory the toString() method could also be hijacked. :-/


If you wanted to test for a native function, I think you could probably do something like:

    (function() {}).__proto__.toString.apply(window.crypto.getRandomValues)
which grabs the toString function off the Function prototype without relying on explicit/modifiable globals.

However I'm not sure if the testing-for-a-native-method idea works in general (it might be possible to say something like `window.crypto.getRandomValues = Array.prototype.slice`, which would show up as a native function, but leave the original, likely 0, bytes in the input array). This might still be okay, because in Chrome that shows up as "function slice() { [native code] }" instead of "function getRandomValues() { [native code] }", but it might not; I'm not sure I have the appropriate js/security background to say.
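If you did want to harden the check a bit, you could also look for the function's own name in the source string (a sketch; the exact text of native-function sources varies across browsers, so this is fragile):

    var src = Function.prototype.toString.call(window.crypto.getRandomValues);
    var looksLegit = src.indexOf("getRandomValues") !== -1 &&
                     src.indexOf("[native code]") !== -1;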


Cute idea, but that's not secure. You can edit the function prototype object to return anything you want:

    (function() {}).__proto__.toString = () => "Hi!"
All functions use the same __proto__ object (including functions that haven't been written yet), and it can be edited from anywhere in your program. (Tested in Chrome 54.)

At a meta level, if you're trying to run trusted code in a JS environment that has some untrusted code in it too, you're going to have a bad time. The same is true in native programs by the way - you can't protect your program from a malicious library you're running in process.

The right way to solve this is to stop sharing a JS environment with libraries you don't trust. I don't know how you can protect yourself from malicious extensions, but you can stop pulling in a kitchen sink of JS libraries by being super selective about what you pull in from NPM. (Which you really should be doing anyway.)


> The right way to solve this is to stop sharing a JS environment with libraries you don't trust. I don't know how you can protect yourself from malicious extensions, but you can stop pulling in a kitchen sink of JS libraries by being super selective about what you pull in from NPM. (Which you really should be doing anyway.)

Well that's just the thing; it's far more likely that a user would encounter either a malicious script on the web, a virus that modifies the browser environment, or a browser that doesn't implement the Crypto API. Relying on the Crypto API for security is irresponsible in a production environment.


>it's far more likely that a user would encounter either a malicious script on the web

If it's a script on a different website (and no privilege-escalating-zeroday is involved), it doesn't matter.

If their computer does get a virus, then it may just keylog everything. If it does hook into a browser, it'll probably be made to log interesting plaintext bits straight out of the DOM before targeting the crypto API. If a virus is targeting users of a specific website and is able to inject code into a browser and fully control the environment that the website's code runs in, then it doesn't need to rely on the website using the crypto API to extract data from it. If the site keeps the key in localStorage, then any code running in that context could read from there too. If the site prompts the users for the password encrypting the key, then any code running in that context could read the password from DOM as it's entered, or prompt the user again. If the site's code is known to stick the key into a 256-byte array, then depending on the browser and type of attack then it could wrap the array constructor and log whenever it sees a 256-byte array get made.

The crypto API actually provides a good defense from some types of attacks. It allows you to create a crypto key that is handled by the browser and never has its key material exposed to page javascript.


You're right; upvoted.

I'm surprised that none of these global objects are set to not-configurable/not-writeable :/.
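If your code runs first, you can try freezing them yourself (a sketch; this only helps against code that runs later in the same JS context, not against code that ran before yours):

    Object.freeze(window.crypto);
    Object.freeze(window.crypto.subtle);
    Object.freeze(Object.getPrototypeOf(window.crypto));        // Crypto.prototype, home of getRandomValues
    Object.freeze(Object.getPrototypeOf(window.crypto.subtle)); // SubtleCrypto.prototype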

And yes, the general case of running untrusted js code in the same environment as your code isn't safe.


I think the article gets this wrong. `window.crypto` should be read-only.


I find the following in Chrome:

    > window.crypto.getRandomValues
    // getRandomValues() { [native code] }
    > window.crypto.getRandomValues = function () { return "aloha" }
    > window.crypto.getRandomValues()
    // "aloha"

Not sure if this is the case in all browsers. `window.crypto` certainly should be read-only.


In Chrome, window.crypto is read-only:

    > window.crypto
    < Crypto {subtle: SubtleCrypto}
    > window.crypto = "hi!"
    < "hi!"
    > window.crypto
    < Crypto {subtle: SubtleCrypto}

But not anything underneath, including getRandomValues(), as you write. A recent issue about this [1] on the WebCrypto spec itself was closed with 'wontfix' because in their view, polyfilling web APIs is a common and accepted practice.

[1] https://github.com/w3c/webcrypto/issues/107


Shouldn't all polyfills only override the functions that are missing?
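E.g. the usual feature-detection guard (just a sketch; msCrypto is IE11's prefixed name for the same API):

    var cryptoObj = window.crypto || window.msCrypto;
    if (!cryptoObj || typeof cryptoObj.getRandomValues !== "function") {
      // only here would a polyfill install its own getRandomValues
    }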


What if it's present but the implementation is incomplete?


E.g. an extra optional parameter.


https://www.w3.org/Bugs/Public/show_bug.cgi?id=25345

It looks like they intentionally don't do that in order to prevent the illusion of security.


See "Building web applications on top of encrypted data using Mylar" https://people.csail.mit.edu/nickolai/papers/popa-mylar-2016...

Mylar ensures that client-side application code is authentic, even if the server is malicious.


I was curious what that meant with "even if the server is malicious." Turns out, in short, the Mylar paper creates a distinction between the "web site owner" and "the server operator."

It shouldn't need to be stated, but not all people believe that is always an important distinction.

Mylar places complete trust in the application developer, whereas an important aspect of other security software is that you don't need to trust the developer: if one version of the software is audited, you can stay on that version; or the other can be examined.

Mylar doesn't make the web any less of an ephemeral and invisible black-box execution environment, where code goes in and vanishes without earlier proof of existence.

It sounds like I'm panning it, but really I think the idea is sound: It makes the lives of non-affiliated/non-government attackers more difficult. I'd use it with my bank website. Just not for highly sensitive communications.


You can hijack toString, sure.

You can also hijack other things. https://www.w3.org/Bugs/Public/show_bug.cgi?id=25345#c5 is a good example of how this can play out in an actual example of code using getRandomValues and then doing something with the return value.

As I said there, the only principled solution here is something like Caja, where you freeze all the primordials up front and then run all the maybe-untrusted code in sandbox-like setups. Well, or not having XSS injection, of course...



