We are also developing a browser extension which will verify the Javascript loaded from the server. (Until then, an attacker who gained control of a server could tamper with the JS to steal a user's credentials the next time they log in.)
Baffling. If you're going to build a browser extension, put the crypto code in the extension. This still isn't a great solution, but it's better than trying to "verify the Javascript code". How exactly do you plan to do that? Surely you realize you can't just digest the JS asset files.
Yeah, I read your Matasano post, "Javascript crypto considered harmful", and I agree-- Javascript crypto is hard!
But it's not impossible.
The reason I chose that route is because I want to make it as easy as possible for users to try out and adopt. Just testing it out? No installation required.
I think that security is at least equal parts a technical problem and an adoption problem. The status quo is that nearly all email is searchable plaintext as far as strong adversaries are concerned. I want to fix that.
> If you're going to build a browser extension, put the crypto code in the extension.
The hard part is running known, trusted JS. Simply packaging the same HTML/CSS/JS as a browser extension was certainly my first impulse! The issue then is that you can never ship an update--users must proactively upgrade--unless you have an auto update mechanism for the extension (eg, Play Store) at which point you're again at the mercy of the server, no better than a web app.
Once you get trusted code running on the client, you can bootstrap. The system I envision is as follows: you, a power user, download the browser extension. You verify the signatures. There are multiple signatures--not just my own--so that no one party can release a tampered client. A set of independent, trusted organizations around the world must sign each release. Let's call them the Signature Committee.
The browser extension simply contains the public keys of each member of the signature committee, and a small bootstrap loader. When you run the browser extension, it gets the latest assets from the server, along with a signature file---and it checks that enough members of the signature committee have signed them.
You now have trusted code, but the developer can still--with the consent of the signature committee--push out updates.
I think that the Signature Committee concept has very cool properties. Currently, even security-critical client applications--for example, the Tor Browser Bundle--are usually only signed by a single organization--in that case, the Tor Foundation. Given that governments have shown themselves willing to coerce organizations to conspire against their users, and to do so in secret, we need something better. I envision a future where Scramble keeps improving continuously, but no one organization--not even any one country--can unilaterally ship an update.
It is impossible. The features required to make browser Javascript safe for crypto aren't even on the roadmaps of browser vendors.
Your reason for using browser Javascript for crypto --- here, Recurity's JS PGP implementation --- is the same as every other JS crypto project's reason: doing everything in the browser makes it easier for users to adopt your project. You are not the first person to point this out and you won't be the last.
The problem is, you acknowledge one side of this design ("it's super easy for users") and ignore the other ("it's fatal to security").
Here, you even acknowledge that there was a simple mechanism available to you that might have marginally improved security (packaging the whole application as a Chrome extension). But that made your life too hard! So you abandoned that idea and just made all the crypto downloadable from the server on HTML pages!
Perhaps I'm ignorant, but I've read your "JS crypto considered harmful post" and I disagree with the key points.
1. Browsers, at least the ones that matter, provide window.crypto (with a CSPRNG). This makes generating IVs for CBC "safe," makes generating RSA keys "safe," makes generating random keys "safe." Or am I missing something? Assuming the JS crypto code itself is actually sound, the PRNG seems to be the missing link... or at least, it was.
2. Packaging the entire app as an addon (assets, javascript, views, everything) means the app can be signed and verified. No in-app code updates. If you want an update, download the latest extension. The article assumes that people will do what OP is doing: downloading code and running it dynamically. I agree this is a no-no, even if verifying.
Is there some sort of attack vector I'm missing? I think it's fair to assume the browser is an attack vector, just like hardware is, just like the OS is, or whatever language VM you're running on, etc...but that seems tangential to the process of packaging/signing an entire javascript app and distributing it as an add-on. I also know about the unsafe memory stuff, but this just brings browser encryption down to the level of Python/Ruby/etc. I'm not dismissing this concern, I'm saying that if you're attacking javascript for having this vulnerability, you must also attack any GC language (which you do in the article).
How is in-browser encryption so different from any other java/python/etc app when packaged/signed? I'm not talking about replacing SSL, I'm talking about encrypting data via AES/blowfish in a browser extension with a key generated from a password, and if the data leaves the browser, using SSL on top of that.
What makes it impossible to do this and gain the same level of security as any other language/platform?
We are not talking about crypto done in browser extensions. We're talking about a project that knows about extensions, knows how to package their project as an extension, but refuses to do so, because that would cost them users, who generally don't want to install extensions.
Your partner, just upthread, explained why you tabled the idea of using a browser extension to house all the logic for your app. I think your argument is with them, not me.
If all of the Javascript code and application functionality is bundled into the add-on, it's trivial to avoid XSS. There's no "site" to script into via the URL, and rendering of dynamic elements can be done via a sandboxed iFrame, preventing any scripts from running within dynamic data. This is fairly basic security that any add-on developer should be aware of: http://developer.chrome.com/apps/sandboxingEval.html
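For reference, the sandbox that Chrome doc describes is declared in the extension manifest; a minimal, hypothetical sketch (the page name is a placeholder) -- `render.html` would be the page that renders untrusted message content, talking to the rest of the extension only via postMessage:

```json
{
  "name": "Hypothetical mail extension",
  "version": "1.0",
  "manifest_version": 2,
  "sandbox": {
    "pages": ["render.html"]
  }
}
```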
"XSS isn't the only way either." That's about as illuminating as saying "something bad could happen."
No one is saying JavaScript or browser security is perfect, but if you actually know what you're doing, it can be done properly.
The original "JavaScript security is doomed" Matasano article is extremely out of date at this point, and yet people keep referring to it like it's gospel.
I don't like the article either, but you're wrong about it being "extremely out of date", and you'd have a very hard time defending your argument with evidence. Do try.
Right, like getting access to the DOM was ever a hard thing to do. I was specifically referring to web apps in that point, but because you insist, I'll just reference [1].
Another vector to get rogue JS into a user's browser is cache-poisoning, something the article also brings up.
Cache poisoning won't work if an extension loads all of its code from its own bundle. So I fail to see how this applies to an app that is fully self-contained within an extension (extensions themselves are signed, so it's not like you could MitM the extension bundle itself...)
> But that made your life too hard! So you abandoned that idea
We haven't abandoned the Chrome extension idea at all. We just think there's a better approach than "packaging the whole application as a Chrome extension". Doesn't sound like you read my reply.
You (a) don't have a Chrome extension that verifies anything right now, (b) are currently publishing the pure-browser-JS version of your application, and (c) envision a future in which a small number of "power users" can "safely" use the system while the majority of users, who you correctly note will not install your extension, will be at the mercy of attackers or, for that matter, your own server.
You're essentially asking to be the next Lavabit. You've added a verification component that probably won't work even for the verifiers. But that doesn't even matter, because the majority of your users won't even be doing that verification.
I respect your well-considered opinions most of the time, but here, you seem to not really be presenting anything constructive. "Javascript doesn't have the features needed to be secure" Which features? Enlighten us.
As for the chrome extension thing, the project clearly states that it is currently a proof of concept and is missing this crucial part which is currently being built. Somehow you try to turn this into an ad-hominem attack on the developer's supposed laziness.
How are you going to block dragnet surveillance if they can surveil the method of your delivering encrypted messages? NSA can already see, and modify, the JS used to create encrypted messages (because they've owned the HTTPS gateways), so they can see, and modify, the encrypted messages.
If they do this for many users, then those who don't have a secure environment (including the browser extension) are vulnerable, but it would be easy to detect that this is happening on a large scale -- those running the extension would be notified. If they don't do this for every user, then it wouldn't be effective dragnet surveillance, and Scramble would have made a dent in their dragnet capability.
Let's stipulate that your detection system, which does not currently exist even as a design document, actually works. By your own admission, only a small minority of users will actually install it. Now: a whistleblower in the USG, who is much more likely not to be a "power user" (if they were clueful they wouldn't be using this system to begin with) leaks a document from your service to a journalist and begins a conversation.
The FBI decides they'd like to read that conversation. They issue a court order requiring you to hand over your TLS private key, and use it to MITM your users to feed them corrupted JS code that discloses PGP private keys.
The small minority of users "detect" this change. But not quickly enough: the whistleblower has already logged in to check their mail, and has now disclosed their private key to the FBI.
What now? What's the feature you're going to build to keep that from happening?
Not only are you using Javascript crypto to provide this "service" to users, but you're not even forward secure.
Tptacek, whistleblower-level security should not be a use case for this extension, and that limitation should be advertised as part of it.
But decreasing the amount of dragnet surveillance by some extent sounds like a good thing to me. For example, many people wouldn't like the government to know mundane stuff about them, like having an abortion. This could help.
* BTW, hasn't Moxie written about an asynchronous Diffie-Hellman key exchange? It could at least give PFS to this, so the private key would be much less useful.
I think the disconnect here is, many of us can't imagine offering something with known technical security issues such as browser-side JS encryption, and branding that "secure". Much less with all the unknown legal, operational, and technical security issues that may arise later in a project like this. As well, from what I read, this project transfers your (encrypted with only a passphrase) private key over the internet, where it will be caught in the dragnet for decryption later.
I find it far more dangerous for someone to think they're secure against government level threats, than to know they're not. And if you're not worried about government level threats, gmail with MFA and pinned chrome certs is likely safer than this project. Of course, this is all just my humble opinion...
It's not a personal insult. This system just doesn't provide serious protection from "dragnet surveillance". Distributing keys and allowing the client to encrypt a message to another user a la GPG is a great idea... you just can't use server-delivered code to do it in the traditional web server/browser way. The JavaScript is modifiable in transit.
Could you be more specific? I don't know who "everyone" is, or what a large scale is, or who is detecting what by whom. In any case, I don't see how anyone could detect real HTTPS MITM.
If the Scramble.io people ship ALL OF THE PROGRAM LOGIC IN THE EXTENSION, it could be secure. But if any program logic (javascript) they interpret is delivered to the user via HTTPS, it will not be secure.
In general, dragnet security will always be possible as long as you can do statistical analysis over both targets in the network (or the whole network), which the NSA has proven it can do.
We assume that the server is compromised, so the extension wouldn't interpret just any JS from the server, whether or not it was delivered by HTTPS. Our idea is to require a committee to review and sign the code, and the extension would only execute code signed by the committee in consensus. This is just as secure as shipping all of the program logic in the extension, except in the case where all the signing committee member keys get compromised, which is unlikely.
My point in the previous comment was that dragnet surveillance wouldn't work at all unless the client's code was compromised, but there isn't a good way for the NSA to compromise ALL OR MOST of the clients' code without it being detected by those users who use the extension. Remember the TorMail episode where malicious javascript was injected in the response? If some users had a Firefox extension that checked to make sure that all the JS code was signed by a committee, then they would have raised the flag and alerted everyone not to use TorMail.
So far, i've failed to see a reliable committee-signing trust system. Moxie's Convergence blows chunks all over my network connections in practice.
Committee depends on things like number of nodes in the network and integrity of the nodes, not to mention you can still do analysis on who was sending or receiving something at a particular time (which may not be enough to stand up in court, but it's enough for the NSA to know that Mike is talking to Jeff, or whomever).
At the end of the day, the best method currently available for clandestine activity on the internet is one-time anonymous drop boxes, and luck.
PS: Most users are not power users, and won't download the extension and manually check signatures.
As mentioned in the writeup, there's a beautiful way you can protect even non-power-users. Because the extension downloads and verifies the webapp HTML, CSS and JS every time it runs, the web app is constantly being validated.
As long as you have a critical mass of power users who installed the extension, an adversary cannot tamper with the web application without immediately being noticed.
A strong adversary could still commandeer the server and serve tampered JS to a specific IP without being detected. Users who are specially targeted by such an adversary must either install the extension, use Tor, or both.
My goal is to make Scramble usable by a wide range of people. For a nontechnical user, it's just as easy as using Gmail--and at a minimum, they get the advantage that Scramble servers never store plaintext.
A user with stronger requirements can do more, and can get stronger security guarantees.
> As mentioned in the writeup, there's a beautiful way you can protect even non-power-users. Because the extension downloads and verifies the webapp HTML, CSS and JS every time it runs, the web app is constantly being validated.
Imagine the following. An attacker manages to hijack your server. They fingerprint[1] the browsers of each user and only send malicious JS to certain users that don't use your extension. No one will ever know that they have been compromised.
WAT. This is a PGP application that works as long as a specific user isn't targeted. PGP did better than that in 1995: keep your key safe, and if the NSA has a 50 foot poster of your head shot hanging in the lobby of Fort Meade, they still can't decrypt the message.
When you build and promote a system like this, you are assuming a responsibility on behalf of your users. You should take that responsibility more seriously.
Great is the enemy of good. Especially when great is not even possible to achieve (wide public adoption of PGP).
EDIT: And you seem to be saying that this is actively bad, which I think is just jumping the gun without identifying any actual issues. Having it be only partially secure until you install a browser extension and then having it properly secure most certainly falls into 'good' and not 'bad' or 'great'.
This is also a PGP application that most people can benefit from immediately (compared to using Hotmail), and one that can be used by even the most targeted users if their environment is set up correctly once the extension is out.
Usually, when people claim to have refuted our crappy old article on Javascript crypto, they have some misguided but at least potentially falsifiable argument for having accomplished that; for instance, "there are browsers with secure RNGs now".
You don't even have that. From what I can tell, you have literally no argument at all; instead, you "agree" with the article while drawing exactly the opposite conclusion that the article draws, then point out again and again how much users don't want to install things, as if that changed the security of this system at all.
My argument is this: "This is also a PGP application that most people can benefit from immediately (compared to using Hotmail), and one that can be used by even the most targeted users if their environment is set up correctly once the extension is out."
If Scramble performed encryption on the server, then my argument would be false. But the design of Scramble's protocol is such that the security of the application depends only on securing the client code, which I argue can be done, for those who set up their environment correctly.
That's the point. If you have to "set up the environment correctly", you should implement a native client. GPG is secure because it transfers data... not code. That's actually the web's biggest problem... you can't trust code given to you by some stranger. If you transfer data, you can verify that it hasn't been modified in transit. It's really impossible to verify that code given to you performs the way that you think it will.
>> I want to make it as easy as possible for users to try out and adopt. Just testing it out? No installation required.
Why not offer an option to try it out in the browser (with the same GUI), and if they want security, tell them to install it (and verify)? It would be just a bit harder than installing the extension.
BTW: there's another use case for extension-based encryption: as the backbone for private messaging for various sites (which requires integration with the browser), for example reddit.
Private messages in reddit are:
(a) viewable to sniffers due to HTTP usage
(b) can be viewed by reddit staff
(c) can be viewed by dragnet and targeted surveillance
An extension would definitely reduce attack scenarios, and has a good viral marketing vector. In fact, implemented the right way, I could see it being a very reliable way to make something like this popular.
It's been a while since I've tried, but I think there are GPG plugins that add buttons for encryption/decryption right to web forms... FireGPG used to integrate directly with GMail but I don't think it works any more.
1. We could require the signatures to be recent, and if there are any certificates that need to be revoked, we could require that all signers include that information in the data signed. What do you think?
2. If other hostile browser extensions can hijack the validation process, then the only solution is to not have other browser extensions. Paranoid users who need strong security should boot from a fixed image that has a vetted browser with only one browser extension already installed.
Re: 1. Yuck - that leaves a window of vulnerability as long as "recent". It also opens up the possibility of hijacking "recent" via NTP to change the client's clock for just the duration of the HTTP GET to the update server - this is not as difficult as it sounds. A better approach would be to have the client pull a list of revoked certs along with the JS source/hash. You'll need a majority wins system for the revocation otherwise the compromised cert could be used to revoke all other certs. Also you need to avoid replay attacks, so you'll have to embed a chain of all previously signed hashes/revocations as well (just being "recent" is not good enough).
Re: 2. Yes, you can't have any other browser extensions. But that is totally unrealistic. It's just a complete showstopper.
Even a bookmarklet could bypass your cryptosystem!
>> A better approach would be to have the client pull a list of revoked certs along with the JS source/hash.
That's not much better, as the client will have to pull a list from somewhere, and that somewhere could have been compromised to serve a bad list. At least with a window you always need an absolute majority. I'm not liking any of these solutions, so you make a good case for packaging an extension with all of the client code!
It's enough for me because I don't use any browser extensions. Maybe it's a showstopper for you because you use other chrome extensions that you don't trust. Here's an idea -- we could offer the client as a standalone application with embedded Chrome. Then you can have your untrusted browser for all your normal browsing purposes, and a separate Scramble app.
I have no idea what you mean about a bookmarklet bypassing the cryptosystem. Are you suggesting that a user clicking on a malicious bookmarklet can thwart security? The user can always thwart his own security.
Yes it is. As I said, you'll need a majority-wins system, i.e. the list of revoked certs is signed by a majority of the signature committee members. Using a window is wide open to attack; it has no redeeming features.
> It's enough for me ...
Unfortunately, "it works on my computer" is not going to cut it. I use a few browser extensions, but now those extensions' own update mechanisms can be used to attack your cryptosystem - even a "good" extension could be hijacked this way. That gives me a lot to worry about...
My point with bookmarklets was that many bookmarklets pull down code from an external server and inject it into the local page, so if the user makes use of any booklets as part of their email workflow then those bookmarklet sites now become attack vectors for your cryptosystem. Likewise, browser 0-days are also attack vectors for your cryptosystem which would not be present in say, a stand-alone client. The attack surface area of a browser is huge.
You can have a signing committee vet the asset files and publish the signatures, which the browser extension looks for. It allows an upgrade path without having to go through the browser app store. For example, browser extensions installed from the Chrome app store update automatically, which implies trust in Google.
The browser extension would only load vetted assets onto the DOM. In other words, you visit the site by opening a new tab and clicking on the extension. It then loads assets and checks that all the assets are signed by a trusted list of code-vetting signers. If all the signatures look good, then it loads the assets onto the DOM. This is no less secure than having all the client code in the extension.
Your comment makes absolute sense if the extension is attempting to validate asset files after they've been loaded. But the extension can be coded to only load vetted code. So I'm not sure what you're talking about. Please enlighten us.
If the entire system lives in a browser extension, you're doing what I said in the comment at the top of this thread --- and why would you bother "loading" anything from the server at all? If you know the hash, you know what the file is supposed to be.
Otherwise, almost every rendezvous you have with the server is an opportunity for methods to be rebound and your cryptosystem to be subverted. If you don't see how, it's a good exercise to investigate.
Or just have the signing committee publish signed JS code. There's no real difference between that and publishing a signed hash. Unfortunately other hostile extensions are still free to mess with the DOM, so it's all a bit pointless.
I see. So, if I set up an intricate browser clean-room environment, and I assume that the Javascript verification system you come up with actually works, I can get some of the benefit of simply installing GPG and using POP mail.
You'd still have to vet your GPG install & mail client code. Also, it's not difficult to fork the code to create a client that doesn't use the browser at all. It just hasn't happened yet.
How do you know which public key belongs to the address? Key servers? Web-of-trust? Look it up on their homepage?
Scramble has an address -> pubkey resolution system which balances security with usability.
At least with Scramble you can create a secure USB stick to boot from with all of that preconfigured. Any solution that uses bare GPG still has severe usability problems.
Just to clarify, as I've seen many of your responses to posts on this subject, you're advocating against use of javascript encryption in client-side applications served through the browser.
If the encryption library were running on a server using node.js or packaged into a mobile app using a framework like Phonegap it doesn't matter that the implementation is in javascript, does it?
The issue is with browser Javascript, specifically with sensitive Javascript that has to coexist in the same runtime and even the same variable scope as content-controlled code.
This relatively old webpage[1] talks about the possibility of side channel attacks in JavaScript (among other things). I believe this applies to Phonegap.
Thank you, these sorts of timing attacks are very interesting. I was asking for clarification because I am curious about vectors such as these; vulnerabilities in the JS runtime or in HTML/CSS. This thinking was spurred by an article posted here not too long ago[1] about timing attacks on CSS and SVG shaders through requestAnimationFrame.
Think about it this way: if you already had a guaranteed-secure method for delivering a hash... then you don't need a hash, you could just use this magical secure delivery mechanism to deliver all the code directly -- or even your whole email message!
Because when the JS files are updated, the browser extension will have to fetch a new hash from the server, but how can that hash be trusted? The usual mechanism would be to RSA- or DSA-sign the JS files and have the extension validate the signature against a public key. However, we're still choosing to trust the server and hope that its key is not compromised.
Non-programmer here. I've gotten great advice on work and links to great resources that have helped me change the way I work. It's not an overstatement to say that great writers like patio11 and Zach Holman have thoroughly changed how I see work, collaboration, and how it should interface with my private life. I don't run a SaaS app (yet) and I certainly don't make open-source contributions, but the principles of their posts can apply to a lot of fields that aren't purely technical. In the last year and a half since I started regularly reading I brought a lot of great ideas to my previous employer and then changed my career for the better.
tldr: Everything I've read here has given me immense clarity in how I personally view life and work, and I'm happier for it.
DigitalJack might claim to not be a programmer, but if s/he knows enough to ask "Why can't you just digest or hash the JS?", that's pretty programmery.
Besides, "Programming" is a large spectrum; I know jack-all about javascript but am comfortable building CRUD apps in C# and am in the intended audience of "don't build your own crypto because you're an idiot" blog posts. I consider myself a programmer (even though I have a lot to learn) and had the same question in my head as DigitalJack.
There is a lot of stuff on HN, with a slim majority being programmer-centric. HN is a place to pique one's intellectual curiosity regardless of coding background. I think there's a lot of coding stuff here because we happen to be on computers to access HN and, as people who are curious, we're likely to have already dabbled in tuning or coding for the computers we are on and other systems.
Apart from a large portion of posted articles not being specifically about programming, there are many great posters here who are non-programmers such as tokenadult. Discussions and articles are not limited to programming. Moreover, the spirit of HN seems to be discuss what the community deems worthy (upvoting) with a few restrictions.
It is, but mostly because of the low barrier to entry in software.
I'm technically minded though--chip designer. Software is a hobby for me. I guess I more meant that I'm not a JavaScript programmer and don't know the subtleties of that ecosystem.
Wouldn't that require the user to install the browser extension to use the service? That would lock out a lot of people that would find this service useful (e.g. Tails users).
SO WHAT? This is absolutely the most aggravating thing about JS crypto advocates: they truly believe that bad engineering can be turned into good engineering by sheer wishfulness. It's important, they say, for people who can't install new software to have encrypted messaging; therefore, browser javascript cryptography has to work. No.
It's nice to see so many projects attempting to solve this problem. I have a question specific to the documentation on the main page:
Private keys are stored on the server in encrypted form. The key derivation function used is as follows:
K = scrypt(Passphrase, Username)
I'm going to skip the "JS crypto is bad".
What I'd like is clarification of the Zero Knowledge section that the keypair is decrypted client-side only and the passphrase is never transferred to the server for any reason. This is important because a state actor or other person with sufficient influence--such as a big wrench--could intercept this via code installation on whatever servers perform this work.
There's still the issue of JS being modified by the same evil actor (which you mention), and I'm not sure if signing with a browser extension is much insurance (that's just replacing code that can be modified in transit/on update with other code that can be modified in transit/on update), but with so much attention being paid to this problem as of late, an adequate solution will be discovered (if it doesn't exist already).
Even with the brute-force protection provided by scrypt, it's very worrying that you can assume the encrypted private key will A) be available to authorities via (secret) court order and B) be captured and stored by NSA types. The security of the key could only be assumed to be as high as the weakest passphrase ever used by the user. With what we know about most users' password security (especially the type not already capable of using GPG, which would seem the target market of this), this seems like a very bad idea.
Why go through all the trouble of attempting strong client side crypto, only to store the private key secured only by a passphrase on the server?
Because only a subset of possible users has clientside storage available for keys, and this project doesn't care about security, it cares about maximizing the number of users it can obtain.
I understand the attraction. You want to solve this problem as if it were just a straightforward hack. I spent a few days looking at this same problem (harpomail.com) and abandoned it because, in the end, it seemed like I could only provide a false sense of security to people who could conceivably lose their lives because they trusted me.
If it had just been a technical risk or a financial risk to myself I would have pursued it, but some risks are heavier than others.
Cool, this was the architecture I sketched in my head for a user-friendly PGP app, although I would have offered to sync the private keys between device clients, rather than doing it all in browser (that way, someone can opt out and keep a key just on their phone or whatever.)
I had a damn good name for it, too, but good thing someone else did it so I don't have to! :)
Thanks! We're working on improving the UI. A few upcoming fixes:
* Clear visual distinction between encrypted and unencrypted mail
* Keep track of read vs unread mail
* Allow users to enter a public key for an external (non-Scramble) address and send encrypted mail to existing PGP users
* Basic search
So, you want to send someone a message. You don't care if someone sees you sending it, but you don't want anyone to know whom you sent it to, or what the message contains.
requirement 1. message integrity and security
1a. use public-key cryptography
- step 1. Person A writes a letter to Person B.
- step 2. Person A encrypts a message using Person B's public key.
- step 3. Person A sends encrypted message to Person B.
- step 4. Person B receives and decrypts the message using Person B's private key.
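The four steps above can be sketched with textbook RSA. This uses tiny toy primes purely for illustration--real PGP uses large keys, padding, and hybrid encryption (the public key wraps a symmetric session key)--so treat it as a diagram, not an implementation:

```javascript
// Toy "textbook RSA" illustrating the encrypt/decrypt steps. NOT secure.

// Modular exponentiation: base^exp mod m, using BigInt.
function modPow(base, exp, m) {
  let result = 1n;
  base %= m;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1n;
  }
  return result;
}

// Person B's keypair: p=61, q=53, so n=3233 and phi=3120.
// Public exponent e=17; private exponent d=2753 (e*d ≡ 1 mod phi).
const pub = { n: 3233n, e: 17n };    // shared with everyone
const priv = { n: 3233n, d: 2753n }; // kept secret by Person B

// Step 2: Person A encrypts using B's PUBLIC key.
const message = 42n; // the "letter", encoded as a number < n
const ciphertext = modPow(message, pub.e, pub.n);

// Step 4: Person B decrypts using B's PRIVATE key.
const decrypted = modPow(ciphertext, priv.d, priv.n);
```

Only Person B holds `d`, so only Person B can reverse the encryption--that asymmetry is the whole point of requirement 1.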
requirement 2. clandestine recipients
2a. coded messages
- con: coded messages are usually broken after enough messages pass
2b. steganography
- not difficult to detect if messages are sent often
2c. dead drops
- pro/con: MITM has to monitor the drop box to determine recipient
2d. peer-to-peer message passing
- can still determine recipient using statistical analysis
I think if you took the Tor model for private services, removed the open circuits, and added a more evenly-distributed constant stream of garbage messages sent to random peer addresses, it would be much less likely that any one message could be linked to any two nodes with certainty. This hinges on the premise that all nodes are constantly receiving messages (mostly garbage) at random. It would probably be horrible for network bandwidth, but luckily most e-mails are very small.
Yes, we're on the same page! We have an idea for how fake messages would be passed around fake friends, and how to grow this "dark" network organically. Let's collaborate!
I think you're going to be disappointed. The only way this could work is based on network participants, and that's highly subjective. But here's an idea that might be plausible.
You have four nodes in your network: B, C, D, E. B wants to send E a message, but doesn't want anyone who might be observing the whole network to know.
B sends garbage messages all day at random to all active peers on the network, or as close as possible. We'll assume an even distribution of these random messages. When it comes time to send the correct message, it gets delivered just like all the others. This really only works if each message is received by a real peer.
What you get is a constant stream of communication at random, in which case you know 99.9999% of it is junk and maybe 0.0001% is real. At this point the observer will start drilling down into everything possible to increase the probability of detecting the authentic message. You'd have to prove that none of their methods can improve those odds, but an actual cryptographer would be a better person to ask about that.
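The cover-traffic scheme above can be sketched as a tiny simulation. The node names and tick count are made up; the point is that the one real send is scheduled identically to the junk, so a metadata-only observer's best guess succeeds with probability 1/N:

```javascript
// Sketch: every tick, node B sends one fixed-size message to a
// uniformly random peer. Exactly one message (to E) is real.
const peers = ['C', 'D', 'E'];
const ticks = 10000;

// B's traffic as the observer sees it: (tick, recipient) pairs.
const log = [];
const realTick = Math.floor(Math.random() * ticks); // when the real message goes out
for (let t = 0; t < ticks; t++) {
  const to = t === realTick
    ? 'E'                                              // the real send
    : peers[Math.floor(Math.random() * peers.length)]; // cover traffic
  log.push({ tick: t, to });
}

// Every entry looks identical on the wire. The observer knows one of
// the `ticks` messages is real, but a metadata-only guess is right
// with probability 1/ticks (here 0.01%).
const sendsToE = log.filter(m => m.to === 'E').length;
```

Note the real message hides among the roughly one-third of cover sends that also went to E, which is why the recipient can't be singled out either.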
If you design based on trusting other nodes (committee and/or trusted peer groups), you have to inherently trust those nodes, and they have to be highly reliable/available, which is kind of ridiculous if you're trying to account for things like oppressive nation states.
If you design based on anonymous communication, it's still subject to analysis of network traffic. Even if the messages you're sending are secret, and you have some peers in the middle storing and handing things off, you can still tell who is sending something and who is receiving something and tell with a high probability if the two events are linked.
If you design based on the premise of hiding amongst a bunch of nodes, or wandering randomly through a maze of lots of nodes, you're dependent on lots of nodes and their expected behavior. The number of nodes may diminish, or their routes may be controlled, or their behavior changed, depending on the influence of the observers/controllers of the networks.
I'm of course only speaking about the delivery mechanism. The encryption of the message on either end is the easy part. It's getting it over there secretly that's really hard.
Users don't have to manage and back up their keys---or even know what a key is. Scramble does it for you.
As a user, why would I want to delegate my key management to you? Encrypting the email I want to send to a person with their public key is what gives me a sense of trust that what I am sending is encrypted and only readable to that person.
However--as the example of Tormail and others show us--the server cannot be trusted to serve uncompromised client-side code, even if the organization behind it is well intentioned.
> As a user, why would I want to delegate my key management to you?
The beauty of it is that Scramble encrypts and signs whenever the recipient is also a Scramble address--automatically. The goal is to protect even users who don't know what a "PGP key" is.
(Don't get me wrong, we're planning improvements for power users as well--for example, we want to make it easy to use Scramble over Tor. But the goal really is "Encrypted email for everyone".)
> So then why am I trusting you with my PGP keys?
No, as explained in the writeup, the server stores your private key encrypted with your passphrase. The server never sees your passphrase, or your private key.
We've used a good key derivation function (scrypt)---this makes it difficult to brute-force the password.
In short: Yes, the server stores things so that you get the Gmail experience, sit down at any computer and it just works. No, the server never sees your plaintext private key.
Thanks for the feedback! If I get a chance, I'll paraphrase your questions and add them to the upcoming FAQ.
> No, as explained in the writeup, the server stores your private key encrypted with your passphrase. The server never sees your passphrase, or your private key.
The only place a private key belongs is the user's machine (preferably one without internet access).
Whenever I see a site with the fixed background images and scrolling content panels (where the background image is periodically visible again), I spend more time trying to figure out what's contained in the background images through those gaps than I do reading the content.
I'm glad to see more projects taking off with the goal of secure email.
I found it really annoying that this service has xkcd style password requirements. My 9-character password with non-alphanumeric characters should be sufficient.
Precisely which vetted, client-side JS crypto library would you prefer the authors use to satisfy your concerns? I haven't looked at the code, and only recently started paying attention to the JS-crypto space. I'm aware of implementations where scrypt was compiled to target the asm.js subset; however, this doesn't really mean the resulting JS running on platform X, in browser Y, is as secure as the resulting binary on the platform/OS it was targeted for.
Scramble has not been widely vetted, so don't rely on it to protect you just yet.
The authors put the above line in the very first bit of marketing you encounter. Until the JS has fully vetted, industry-standard crypto functions designed to be secure for each target platform, vetting this kind of crypto is going to be hard. The addition of a cryptographically sound PRNG is a big move in the right direction. That said, I believe the authors got it right with what is currently available: inform the user in such a way that there is no confusion, open source the code so others can participate in vetting, get some attention to the project so others are motivated to participate in vetting, and continue to improve as problems are discovered. That's really the best you can do, IMHO. In security, the author of a library needs to be correct 100% of the time, while an attacker needs to be correct only once.
You can't serve dynamically-loaded crypto code. What happens when someone hacks Gmail and replaces crypto.aes.js with crypto.plaintext.js, and suddenly every Gmail user who thought they were sending encrypted mail is just sending plaintext messages?
Crypto code needs to be packaged/signed/verified and cannot load in code dynamically without running the risk of completely compromising its security.
This is why it's just not possible to securely serve code that does crypto in a web app. Browser extensions are the next step up, but they also have to be careful to never load code from any external locations (among other considerations they have to make when running in a browser environment).
Yes, I agree; the fact that it's unclear if it's possible is what makes it the holy grail, it's unclear if the holy grail actually exists, but lots of people really want it. :)
If you just expose your service as an API then you don't have to worry about users trusting code given to them by the site. Then use a native client app that doesn't suck to interact with the service...all of the encryption is done client side with all of the messages being encrypted end to end.
JavaScript encryption just isn't really valid in browsers...the browser runtime is to blame. It's funny that people have to all learn the same lessons over and over again...it's a worthwhile goal...keep at it, just try a different approach.
> It's funny that people have to all learn the same lessons over and over again
That pretty much defines what the field of computer security is. From DES to AES, we learned the same lesson: as things designed to be secure are put out in the real world, given sufficient time, they're broken. It's important not to make the same mistakes again and again, and that's nearly impossible to do with JS crypto since there are so many permutations of platform X and implementation (browser) Y. As long as the number of platforms is large and the number of browsers is large, having one implementation that works properly everywhere isn't practical.
However, a specific implementation targeted at a specific browser/platform could be vetted provided the JS engine handles random numbers in a cryptographically sound manner. Ideally, the browsers would expose ways to call vetted cryptographic APIs directly via JS.
That really doesn't change the equation. That's only if you can trust the client code...and it just isn't feasible with JS sent to you from a 3rd party.
That sounds much more interesting. Solving key exchange and message transport in a friendly way, then using a native client has possibilities. I think that's the direction [Hemlis](https://heml.is/) is taking.
That's not a bad idea. Take the browser engine and embed it in your app...then never load code that isn't your own. That's not a web app...that's a native client.
https://news.ycombinator.com/item?id=6637915
https://news.ycombinator.com/item?id=6420739
https://news.ycombinator.com/item?id=6353137
https://news.ycombinator.com/item?id=6317685
(That's just the last few weeks).