Firefox Send: Free encrypted file transfer service (blog.mozilla.org)
2031 points by dnlserrano on March 12, 2019 | 512 comments

I've been building a fully featured CLI tool for Firefox Send, supporting this new release.

For anyone who is interested: https://github.com/timvisee/ffsend

FWIW, I built and successfully ran it on FreeBSD-current. The only hiccup I ran into was that it puked building due to not having /usr/local/lib in its lib search path & not being able to find libxcb. I had to manually add -L/usr/local/lib to the cc args and manually link it. Not sure if that is a FreeBSD issue w/Rust, or something in your package.

At any rate, the tool works! Thanks so much.

Thanks for sharing your solution! I'm not sure what's causing it (maybe it's related to the OpenSSL bindings), and I'm not really targeting FreeBSD yet.

To be honest, this tool wasn't fully ready for the Firefox Send release; I would have loved to provide better binaries and packages for more platforms, which are a work in progress.

If you believe you can improve the README with your solution, be sure to submit a [PR](https://gitlab.com/timvisee/ffsend/).

Happy to see it's working! :)

It's a BSD world thing :). Local (i.e. non-system) executables and libraries go under /usr/local around here: libraries under /usr/local/lib, binaries under /usr/local/bin, and so on; the hierarchy under /usr/local has the same structure as that under /usr.

You can work around it by creating a cargo config with a wrapper. Eg:

  $ cat ~/.cargo/config

  [target.x86_64-unknown-freebsd]
  linker = "/home/drewg123/bin/cargo-ld"

(the linker key has to sit under a [target.*] section; adjust the triple as needed.) Where cargo-ld is just a wrapper:

  #!/bin/sh
  exec /usr/bin/ld -L/usr/local/lib "$@"

(use "$@" rather than $* so arguments containing spaces survive quoting.)

For proper formatting of code snippets, indent the entire snippet with two spaces.

> the hierarchy under /usr/local has the same structure as that under /usr

So I can have /usr/local/local/local/local...? :)

Is there an autotools equivalent in Rust land?

Depends on what you mean exactly.

I just had the same issue yesterday, with Rust linking to a 1553-bus library in /usr/local/lib. Seems like that should be on the search path.

At least on BSD, you want to be able to separate external third-party libs from system libs in case they overlap; that's why the BSDs don't automatically search /usr/local.

Python cli version at https://github.com/ehuggett/send-cli

disclaimer: I haven't used either cli version.

Cool. Sadly, I don't think that client supports the current Firefox Send version though. The encryption method has changed over the last few months.

Love the demonstration on the Github page!

This is great, thanks :)

Mind if I port this to JS?

You don't have to ask for permission to fork open source projects.

You are free to port the project to JS as long as you follow the applicable licenses: https://github.com/timvisee/ffsend/blob/master/LICENSE

Thanks a lot. The first thought after seeing this was that I wish it had a CLI and I know I am lazy enough to never write one.

Do I need to install Firefox to use this tool? Looks neat!

No, see the requirements here: https://github.com/timvisee/ffsend#requirements

Along with ffsend, you can use any browser to upload/download files through https://send.firefox.com/ as well.

Nope, you don't need Firefox to use the site anyway.

This is such a fantastic tool to have, thank you so much!

This is neat, thanks.

In the not-so-distant past, HN'ers loved to quote tptacek's legendary rant about how in-browser JavaScript crypto is fundamentally broken[0].

What changed? Is that rant finally outdated? Couldn't Mozilla at any time serve a corrupted JS bundle (with or without their knowledge) which would leak the key somewhere, silently replace the encryption by a noop, etc?

I ask out of interest, not skepticism. I much prefer an internet where we can trust web apps to do proper crypto than one where we have to depend on some app store to somewhat adequately protect us.

[0] https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...

Some of those points are relevant and some aren't. For logging in to a website, "just use SSL/TLS instead" makes sense, but not for this use case. There are better options nowadays for doing crypto in the browser, but I wouldn't be surprised if they were at least theoretically vulnerable to side-channel attacks from JS running in another tab.

The main thing is that unless you're paying really really close attention to the JS that you're executing, you can't trust this any more than you can trust Mozilla and the security of whatever computer is serving their pages. I wouldn't use this for sending data that you're trying to hide from a nation-state, but it looks like a great option if you want to send a video to your grandma without posting it publicly on the internet or teaching her how to use GPG.

Followup question:

I have Signal running on my Linux computer and on my Android phone. On the Linux computer it doesn't have root access, but it does have access to its own files, so in theory there's nothing to prevent it from making a network request and updating itself. Additionally, I don't ever check Signal before installing a new update, I just blindly do it.

On my Android device, I also have auto-update turned on, because my only option is to turn it on for every app or none of them. So there's nothing to prevent Signal from updating itself and changing the crypto. If I were on an iOS device, I wouldn't even have that option -- to the best of my knowledge you cannot turn off app auto-updates on an iPhone, but maybe someone can correct me if I'm wrong. In any case, it doesn't matter that Signal is updated "rarely". An attacker only needs to install one back door, they don't need to update it a hundred times.

So for an extremely typical user like me, who has been taught for as long as I can remember that the most secure thing you can do on an OS is install updates as they come in when they come in, doesn't Signal have the exact same problems as Mozilla? If someone compromises Signal's servers, can't they add a side-channel just as easily?

In theory, I could disable auto-updates and only update Signal when I looked at the source code, just like in theory I could examine the JS that I'm executing every time I connect to a site. But in practice, I don't.

When I read tptacek's rant nowadays, the immediate thing I think is: "The web is malleable? Literally every single computing environment and device I own is malleable." It feels like if I were to take tptacek's advice to its logical conclusion, I would just conclude that E2E encryption in general is dead.

Yes and no, depending on your threat scenario.

I would assume Signal to have a proper signing infrastructure in place, so that the keys used to sign new releases are not available to the server hosting/deploying the actual update files (or providing them to Google/Apple for that matter). So simply taking over that server would not be enough, as malicious updates could not be installed.

Assuming Moxie goes over to the dark side, however, you are screwed. There's nothing stopping your Signal app from bundling all your plaintext messages once you've entered your password and sending them off to China, save maybe a firewall you have in place. Google or Apple might stop such an update during their reviews, but I wouldn't bet on it.

Signing infrastructure does seem like a significant improvement over Javascript delivery, but does that also carry over to platforms like Windows?

Again, please correct me if I'm wrong, but Windows doesn't do anything with signing app updates, does it? Come to think of it, I'm not 100% sure my Linux version has this either, since Signal isn't being distributed as part of the official repos.

If Signal is being updated on Windows without validating any kind of signature, could a compromised server even pull off the "send a malicious payload to only one IP address" attack that people talk about with the web?

While Windows does allow code signing of executable files in general, I doubt Signal is using that system. The official Windows Store would probably work similarly to how Apple and Google handle updates, but Signal doesn't use it either.

You can always implement signing yourself, though, without relying on somebody else's infrastructure. Just include the public key in the app itself and use it to verify your updates are properly signed by your private key before accepting them. I haven't checked but assume/hope Signal is doing this with their updated JS packages.

If no such signing is in place, however, then the answer to your last question is "yes", though with a caveat: If Signal's servers are compromised and push out a malicious update, then all bets are off, as the app running on your system has access to all your unencrypted messages. If the compromised server is only one of the messaging/relay servers, however, things are not as bad, as they don't have access to your keys and thus can't decrypt your messages. They can still forward them somewhere else for later decryption, but thanks to perfect forward secrecy this is currently rather unrewarding.
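The "include the public key in the app itself" idea can be sketched minimally. This toy Python example (all names invented) pins a SHA-256 digest instead of a real public-key signature, since the stdlib has no asymmetric crypto, but the trust shape is the same: the trusted value ships with the installed app, and the update server only ever ships blobs.

```python
import hashlib
import hmac

# Digest of the known-good release, baked into the installed app.
# (A real scheme would bake in a public key and verify a signature;
# a pinned hash is the simplest stand-in the stdlib allows.)
PINNED_SHA256 = hashlib.sha256(b"update-v2-payload").hexdigest()

def verify_update(blob: bytes) -> bool:
    """Accept an update blob only if it matches the pinned digest."""
    digest = hashlib.sha256(blob).hexdigest()
    # compare_digest avoids leaking where the comparison diverges
    return hmac.compare_digest(digest, PINNED_SHA256)

assert verify_update(b"update-v2-payload")         # genuine update passes
assert not verify_update(b"evil-backdoored-blob")  # tampered blob rejected
```

With a real signature scheme, the pinned value (the public key) stays fixed while releases change; the hash pin shown here would have to be updated out-of-band for every release, which is exactly why real updaters use signatures instead.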

So the takeaway I'm getting from this and a few other comments is that, in general, running automatic updates for most software is still more secure than not, since a 0-day is more likely than a compromised server.

E2E encryption is still valuable, because assuming that the codebase is delivered/signed separately from its app servers, it decreases the number of available attack points. It's usually easier to secure code delivery than it is to secure your entire backend/database. It's even easier than that to secure a private key that you never put on your delivery servers in the first place.

JS has some additional concerns regarding stuff like Spectre and random number generation, but ignoring those for a sec, E2E encryption is in theory viable and valuable on the web, assuming you've split your backend from your code delivery endpoint and are taking extra steps to secure those specific code delivery servers.

But E2E encryption on the web could be improved a lot if we expanded code-signing for browsers. We download code over SSL, but that's just to make sure no one MITMs the server itself. We could, in theory, have some kind of signing system for raw assets that was completely unrelated to the connection to the server -- an "only allow Javascript to download/eval on this domain if it's signed with a key that's not even stored on the delivery server at all" policy. But we don't have anything like that yet.

Is that a reasonable interpretation?

> On my Android device, I also have auto-update turned on, because my only option is to turn it on for every app or none of them.

Open the Signal store page and click the dots in the top right of the screen and untick Automatic Updates.

Oh crud, thank you!

I didn't know that, and there are a few apps that I definitely want to use this with. Why on earth isn't this part of the general settings?

If there’s a remote code execution vulnerability, normal users will update but you won’t. If you are voluntarily replacing automatic updates with manual processes, be sure to update Signal before using it each time, or a nation-state can tap a zero-day to infect all the experts who know better than to leave auto-update enabled.

You can absolutely disable app and OS auto-updates on iOS.

This is really the point here. But the danger is always that someone who needs strong, nation-state-level crypto gets used to this and doesn't realize the implications of using it when trying to keep state-level secrets.

Indeed, you can rest assured that this will be used to share passwords that should not be shared this way. I would be surprised if it hasn't been already.

It's not outdated; it remains fundamentally true. But I'm uncomfortable with people calling it a "legendary rant" because it was dashed off and I never promoted it as any kind of last word on the subject. There are better arguments against browser cryptography than mine.

In particular: you'd hope that WebCrypto would have changed things a bit, but, of course, it doesn't: it leaves all the cryptographic joinery up to content-controlled code. You're probably somewhat less likely to have side-channel flaws if you use it, but in reality timing side-channels are more talked about than seen. Who would bother, when they can just deliver a script that exfiltrates secrets directly?

You have said a bunch of useful stuff in HN comments that people end up pointing to, but in those comment rants you also have a tendency to leave things hanging or allude to things without further explanation (I think for fear of being boring), or to assume people understand the context of a long-running debate.

I think you should consider hoisting more of this stuff out into standalone blog posts that you can flesh out and also update as circumstances warrant. I don't think I'm the only one who has learned a lot from reading you, but often felt myself wishing it had been dumbed down a shade for beginners.

Maybe the best argument for it is that blog posts remain mutable and you can add and expand as necessary, unlike these HN posts that are frozen in amber.

This place has basically ruined me for writing. I used to sort of know how to do it! The idea of writing a top-to-bottom "browser Javascript is evil" post is intimidating to me now. It was intimidating when I wrote the post referred to above! And that one wasn't even good!

I'll work on it.

One idea is to get a volunteer or hired goon to simply collate your HN posts and post them somewhere editable. Then when you read them over, you'll be horrified and the editing instinct will kick in.

I don't think it's you who made it legendary. I think it's the HN commenters who keep linking to it who did that (myself included, since yesterday).

And, well, you may disagree but to me it definitely reads like a proper rant :-)

Please note that I chose the words "legendary rant" with all the love imaginable and I had hoped you'd interpret it as nothing other than a compliment. I much appreciate your contributions to HN and the internet as a whole.

>There are better arguments against browser cryptography than mine.

Mind pointing to or sharing them?

SubtleCrypto is part of the Web Crypto API, a spec browsers have adopted for performing crypto operations natively. For example, instead of using Math.random() for random number generation, you can use https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getR... in combination with the SubtleCrypto functions to work with keys securely

Your points around a compromised JS bundle are still possible but that has more to do with a company’s deployment/change management setup than JS itself imo

> Your points around a compromised JS bundle are still possible but that has more to do with a company’s deployment/change management setup than JS itself imo

But that's the only point I intend to address here. If Pascal had been the language of the web then my question would have been about Pascal.

Therefore I don't see how SubtleCrypto changes matters much.

In short, if I get it right, the argument would be that in e.g. a mobile app, all the E2E logic (the core crypto plus the code around it) goes through peer review, then some release-management process, then some review by Apple or Google, before it lands in my hands via their app stores' well-secured delivery mechanism. In a web app, a single compromised server will compromise all security instantly. Generally I'm fine with trusting Mozilla's servers, but if I have to trust their servers then what's the point of end to end encryption?

> In a web app, a single compromised server will compromise all security instantly.

This is only true if the server has access to the keys of your data. E2EE typically means that it doesn't, only you do.

In a browser, the server serving the JS has an opportunity to access the keys.

This is the case with all E2EE tools. You have to trust that they do their crypto correctly and that they aren't evil. As Firefox Send is open source, you can set up your own server if you don't trust Mozilla, but then again, if you don't trust Mozilla you might want to eyeball their code carefully first...

Your description is very simplistic, but yes, you have to trust the code that's delivered to you. For example, no Android/iOS user would check every single update to E2EE apps they install for backdoors. However with web, there's an opportunity for a backdoor in every single request and the server can ship different code to different users. In my opinion, using web cryptography is still worth it, but it's definitely more risky than native apps.

SubtleCrypto has an API to generate a keypair that you can't extract or access from the JS side. You can only use it to encrypt/decrypt buffers, not access the key itself.

Sure, but having access to encryption/decryption/key derivation is pretty much equivalent to having the key in most circumstances. Plus, JS generates the key and sets "extractable" flag.

And how does that work with Firefox Send? Isn't the key somehow in the payload or the URL?

Without knowing almost anything about Firefox Send, it does seem that the key is embedded in the link you give to your friends. In that case E2E means that the key is not stored on the server. In order to guarantee that, the link is probably in two parts, one that identifies the file on the server and one is the key. The key part of the link is probably generated on your machine and thus never sent to the server (to prevent it being in any logs or what not). So if Mozilla's servers are compromised the attacker still would not be able to decrypt your files. Of course if the server is compromised the attacker could serve up malicious JS for future uploads.

If it works, it prevents mass surveillance and makes insider attacks much more difficult.
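The fragment detail above is worth spelling out: everything after the # in a URL stays on the client and is never included in the HTTP request, so the server and its logs only ever see the file ID. A hypothetical Python sketch (make_share_link, the domain, and the URL shape are made up for illustration, not Send's actual format):

```python
import secrets
from urllib.parse import urlsplit

def make_share_link(file_id: str) -> str:
    # Key generated on the sender's machine and placed in the fragment;
    # browsers never transmit the fragment to the server.
    key = secrets.token_urlsafe(16)
    return f"https://send.example.com/download/{file_id}/#{key}"

link = make_share_link("abc123")
parts = urlsplit(link)
# Only scheme, host, and path go over the wire; the key stays local.
assert parts.path == "/download/abc123/"
assert parts.fragment != ""  # key is present, but only client-side
```

So even a fully compromised storage server would hold only ciphertext plus an ID, never the key, matching the "two-part link" reasoning above.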

Didn't realize it had full support in every browser, even IE: https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getR...

The problem with that table is that it only lists the entry methods, which are supported in all browsers. However, the actual work is hidden behind parameters, not all of which are supported by all browsers, and some only in odd combinations. One example is that Edge does not support PBKDF2 in any form, which makes much of its remaining support a bit awkward to use.

Here's a site where you can test your browser's compatibility with many combinations: https://diafygi.github.io/webcrypto-examples/
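For reference, PBKDF2 itself is just a standard, deterministic key-derivation function; what a browser's deriveKey call computes can be sketched with Python's stdlib (the password, salt, and iteration count below are illustrative only, not what any browser or Send actually uses):

```python
import hashlib

salt = b"per-user-random-salt"   # illustrative values only
password = b"correct horse battery"

# 100k iterations of HMAC-SHA256, yielding a 32-byte (256-bit) key
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

assert len(key) == 32
# Deterministic: the same inputs always derive the same key
assert key == hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
# Changing the salt derives an unrelated key
assert key != hashlib.pbkdf2_hmac("sha256", password, b"other-salt", 100_000)
```

The browser-compat pain in the comment above is precisely that this one well-defined computation isn't uniformly available behind SubtleCrypto's parameters.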

I think it uses a cryptofill shim for browsers that don't support all of the crypto api

The SubtleCrypto portion of the API is slightly less supported in that it appears to have spotty/non-compliant IE and Edge coverage.[1]

1: https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypt...

That article primarily comes down to this:

> WHY CAN'T I USE TLS/SSL TO DELIVER THE JAVASCRIPT CRYPTO CODE? You can. It's harder than it sounds, but you can safely transmit Javascript crypto to a browser using SSL. The problem is, having established a secure channel with SSL, you no longer need Javascript cryptography; you have "real" cryptography.

In our case we aren't doing crypto inception where the cryptography is meant to secure itself. The crypto is being served securely (by ssl) and then used to solve the separate unrelated crypto problem of encrypting random files.

That question seems closely tied to

> WHAT'S THE "CHICKEN-EGG PROBLEM" WITH DELIVERING JAVASCRIPT CRYPTOGRAPHY? If you don't trust the network to deliver a password, or, worse, don't trust the server not to keep user secrets, you can't trust them to deliver security code.

I haven't looked at the details of how Firefox Send works, but if you can download and decrypt the file with nothing more than an https:// URL, it seems like you'd have to trust the server, either to handle the cleartext or to provide trustworthy code to handle the cleartext.

I suppose an alternative would be to generate a data: URL, but if it has to include all the crypto code, I wouldn't expect it to be nice and compact.

> I suppose an alternative would be to generate a data: URL, but if it has to include all the crypto code, I wouldn't expect it to be nice and compact.

Sounds like a challenge for the code golfers.

Fundamentally the situation has not changed much. You re-download the code every time, and servers could deliver tailored, compromised versions if ordered to by some TLA. Which means audits have limited value, since they can't attest that what they examined is what anyone actually gets.

Compare with native tools, which you download only once, whose signatures you can check, and which strive for reproducible builds so that multiple parties can verify them independently.

There was also a time just a few years ago when evangelists claimed JS/CS/etc. were just as fast as native (some said faster) and blasted you for suggesting otherwise, even when it was clear as daylight this was blatantly false. This mantra also suddenly just faded away once native compilers for these gained popularity. I guess reality hits you after some time.

Now I see a similar issue with security experts preaching that merely possessing a single piece of software with a single thing they classify as a 'vulnerability' implies you will be murdered within the next 24 hours, and it seems they'll happily DoS your computer, get you fired from your job, take your second newborn, and blow up your computer in your face if that's what it will take to make you finally feel real danger. Not sure why it takes people so long to see that reality isn't black-and-white, but better late (hopefully) than never.

There are many use cases where compromise through code-interdiction after warrant is a perfectly acceptable risk. Also considering what it replaces may further increase the weight of privacy gain. Absolutism is definitely not the way to go, and looking at the state of the tech community (eg. npm, apt, pip, pacman, check that sha256 sum) we left the design-it-right-first a long time ago. A valid argument, though I wouldn't defend it to the death, is that we need to work slowly back toward more secure behaviors rather than chasing absolutely secure technologies. I think send.firefox is a step back from dropbox for some.

SSL isn't the only crypto you'd ever want to do, though. What if you want to encrypt data so that it stays encrypted all the way through the layers of the application to the database? That's a valid use case in tandem with SSL. Also, I have to mention cryptocurrencies.

As long as there is a possibility, I say yes - not "if" but "when."

Humans are always the weakest link with the internet and someday, sometime, bad code (unknowingly) will be pushed and something will happen to someone.

Couldn't they do the same if the crypto code were on the server?

Seems like Send would have to be a built in browser functionality or maybe a plugin.

Not relevant to me as all of my sites are entirely secured with SSL.

Is the source available for this? A self-hosted version of this would be nice...

(Update: Yep, just found it: https://github.com/mozilla/send, just before the comment below was posted :))

Reminded me of zerobin / privatebin. We used this internally at a previous company to share passwords and sensitive files.

Data is encrypted at client and a url with a key is generated.

Can be used 'burn after reading' or with some specific lifetime.

As far as I can see, this requires S3 or an S3-compatible service. That kind of defeats the purpose of self-hosting unless you can set one up yourself (maybe you can; I didn't look).

EDIT: Apparently there's a way to use filesystem instead of S3, it's just not well documented.

Minio is your friend. (S3 compatible self-hosted, open-source object store).

I recently implemented a text/snippet sharing tool that uses Minio instead of S3, because I like to self-host everything.


I use this with Seafile to store data in Azure Blob storage - it was incredibly simple to setup and has been rock solid since. Highly recommended!

I'm curious, how do you use both seafile and minio?

Minio is an S3 proxy, allowing you to use different storage backends with systems that support S3-compatible blob storage (like Seafile).

In my case I have Minio in front of Azure blob storage, so Seafile is storing data in that.

I host Seafile and Minio using Docker Compose, which was super-simple to get started with.

FWIW, it took me about 5 minutes to get it going after I updated Node.js to version 10. It took me another 5 minutes to realize it set up local filesystem storage in $TEMP automatically. I was impressed with how easy it was to get going, and that it picked sensible (though not well documented) defaults :)

The default npm install; npm start will give you a self-hosted app. The docker one is a different beast.

Anybody have a nice docker version to run this at home?

It looks like Mozilla publishes a container and has instructions:


I've used Firefox Send for several months while it was still a test pilot program. It's been very useful for quickly sending files to family. The fact that the link expires as soon as the other party downloads it means I don't have to worry about clean up.

Do you ever run into a problem when an overzealous email service or virus scanner pre-fetches the link and invalidates it before an actual person clicks on it? This used to happen with all sorts of links in emails, though I haven't heard about it in a while.

To my knowledge, the link is only invalidated if the person actually downloads the file. Simply viewing the link does not invalidate it. I can recall a couple of times emailing a Send link to my brother, then checking it in the morning to see if he had grabbed it or not. It was still viewable, so I deduced that he hadn't grabbed it yet, sent him a follow-up e-mail ("Hey you lazy bastard, grab the damn file before it automatically expires!"), and checked again a few hours later to find it gone.

> To my knowledge, the link is only invalidated if the person actually downloads the file.

I've used one-time-link services sometimes, and posting the link to Slack causes Slack to make an HTTP fetch looking for metadata, which then invalidates the link.

Which is why you put a trivial password on the file, which isn't included as part of the link.

Then, any automated system which sees the link cannot accidentally cause the file to be "downloaded" which would cause the link to be invalidated. They can see the link itself, but they don't have the password, therefore they can't download the content to scan it.

I have used onetimesecret.com a number of times in this way.

The easy solution is for the link to lead to an interstitial that shows "Do you want to view the content? This will be the only time you can do so.", and to make the underlying HTTP resource unpredictable (e.g. a different ID from the one in the link) so that it cannot be directly addressed.

It's a very common and easy to anticipate issue, I'm surprised that there are any one-time-link services left that suffer from it.
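That interstitial design boils down to a tiny mapping from the public link token to a hidden, one-shot download ID. A purely hypothetical sketch (OneTimeLinks and both token formats are invented):

```python
import secrets
from typing import Optional

class OneTimeLinks:
    """Public link tokens map to hidden one-shot download IDs."""
    def __init__(self) -> None:
        self._pending: dict[str, str] = {}

    def create(self) -> str:
        token = secrets.token_urlsafe(8)                  # goes in the shared URL
        self._pending[token] = secrets.token_urlsafe(16)  # real resource ID
        return token

    def claim(self, token: str) -> Optional[str]:
        # The interstitial's "yes, show me" button calls this exactly once.
        # Prefetchers that merely GET the link page never reach it.
        return self._pending.pop(token, None)

links = OneTimeLinks()
t = links.create()
assert links.claim(t) is not None  # the first (human) claim succeeds
assert links.claim(t) is None      # the token is now burned
```

Because the real resource ID is random and revealed only through claim(), a Slack unfurler or antivirus prefetch that fetches the link page can't consume the download.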

What if the download was interrupted because, e.g., the other person had a temporary issue with their internet connection? Does the server at least detect that the entire file has been sent over the socket? Does it at least check, at the TCP level, that it received all the ACK packets it was meant to receive? That still isn't foolproof, but it would go a good way towards detecting interrupted downloads.

Does the link expire after a successful transfer? Curious what happens if the transfer fails mid transfer and needs a retry.

No one I've tried it with has ever had it fail on them.

But to answer your question: I uploaded a 100 MB+ file to Firefox Send, copied the link, RDP'd into another computer, kicked off the download, and then cancelled it midway through. The link did expire after that.

So I guess they don't have an easy way of telling whether the download is successful or not. Maybe Mozilla's engineers can figure something out if the issue is raised.

If they did, you could abuse it: transfer every byte except the last, fetch that final byte through a custom link to complete the transfer, and you'd have unlimited distribution ;) I think it's best the way they did it.

I appreciate you took the time to run a test to answer my question. Thank you.

Firefox might consider keying off the initial IP seen upon retrieval and extending the TTL of the object until the final byte has been retrieved.

I can see benefits to keying off the IP, but also to keying off some short-lived cookie. One of the reasons I imagine a download might fail is a spotty or problematic VPN or proxy, or trying to fetch a somewhat large file from a location that can't handle it well (some random, overused coffee-shop wifi).

There's probably enough complexity and possibility for abuse in allowing automated requests for files again (i.e. a button on the view page) or special logic for second attempts that the safest option is just to have the receiving party ask for the file again through whatever medium originally kicked off the request (an email, an IM, etc).

Firefox could do any number of things to make it easier on the user, but I expect them to take my security and privacy very seriously and to err on the side of those ideas rather than usability, so hopefully if they come out with something it's not at odds with those goals.

Did you test to see if the download could be resumed?

In an ideal world, partially-downloading the file would expire the link, but the server would still allow the file download to be resumed (but not restarted).
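Something like that resume-but-expire behavior could be modeled as follows (entirely hypothetical; ResumableOneShot and the client-keying are invented, in the spirit of the IP/TTL idea upthread):

```python
class ResumableOneShot:
    """Link expires on first touch, but the first client may resume."""
    def __init__(self, size: int):
        self.size = size
        self.claimed_by = None   # IP (or token) of the first downloader
        self.offset = 0          # bytes successfully delivered so far

    def request(self, client: str, start: int) -> bool:
        if self.claimed_by is None:
            self.claimed_by = client   # link is now burned for everyone else
        if client != self.claimed_by:
            return False               # nobody else may download
        return start <= self.offset    # resume only within the delivered range

    def delivered(self, upto: int):
        self.offset = max(self.offset, upto)

f = ResumableOneShot(size=1000)
assert f.request("1.2.3.4", 0)      # first downloader starts
f.delivered(400)                    # connection drops at byte 400
assert f.request("1.2.3.4", 400)    # same client resumes
assert not f.request("5.6.7.8", 0)  # everyone else sees an expired link
```

The "resume only within the delivered range" check is what blocks the all-bytes-but-the-last trick mentioned above: a second party can't complete a deliberately unfinished transfer.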

If you are worried about it you can specify the number of downloads before expiration

If relevant Mozilla people are here: Send does not work if "Delete cookies and site data when Firefox closes" checkbox in FF preferences is checked. Even the page doesn't load [1]. It surely is a bug, because I am not closing Firefox.

That checkbox is #1 reason I only use Firefox.

[1] Developer console log output: "Failed to register/update a ServiceWorker for scope ‘https://send.firefox.com/’: Storage access is restricted in this context due to user settings or private browsing mode. main.js:38:10 SecurityError: The operation is insecure."

You should be able to whitelist https://send.firefox.com/ with the "Manage Permissions..." button right next to that option.

I block _all_ cookies except for a small list of sites (like HN...).

This is a current Firefox restriction: https://bugzilla.mozilla.org/show_bug.cgi?id=1413615

a bit off topic but here it goes...

This is how I think Mozilla can win users back to Firefox: by providing "extra" services attached to the Mozilla and Firefox brands, making the product superior for the end user. Sure, it's hard to compete with Chrome, but if you offer useful features and services integrated into your browser, I think Mozilla actually has a chance to compete with Google for the browser space.

This is one of the "advantages" of Chrome over the competition, if you are a heavy Google user: everything is attached to your Google account. Passwords, history, spellers, dictionaries, shortcuts, etc.

If Mozilla comes with Send, Notes, and a password manager all integrated in Firefox, I see a good way to bring back some of the previous users who switched to Chrome.

Along the same lines, a Gmail-esque Thunderbird web service would be amazing. I could finally de-google myself completely if that were the case.

Currently, I need to set up my own email hosting through a service like Fastmail and then configure a desktop client (like Thunderbird) to use it.

A Mozilla Gmail-esque service would remove a lot of the friction there and probably bring in a bunch of users who are tired of Google running everything.

Fastmail has a nice (and snappy) web interface. So you don't _need_ to set up a desktop client, unless you want to.

I've used it for a contract project I worked on. It wasn't bad, but it was difficult to filter when there were a lot of messages.

You may be right, but I hate it. There is no reason I can think of to have all these tools integrated into a web browser, and the idea of having the Internet broken into silos based on your choice of browsers scares me.

We don't need another AOL Chrome.

Good to know is that Send is not a silo: you can use it with any browser.

the problem with "AOL Chrome" is that it's built on advertising and internet-company services, whereas Mozilla has a chance to provide a service, even a paid one, that is ad-free and privacy-friendly, like their latest experiments.

Literally all they need to do is advertise tree style tabs. It's the reason half my office stopped using Chrome.

That's an add-on right? I don't think they can advertise something that's not in Firefox's core

It's something only Firefox has. And it should absolutely be in its core. Preferably without requiring a custom CSS file to hide the old tab bar.

> By providing "extra" services attached to the Mozilla and Firefox brand

How is that different from the complaints people make about Chrome tightly integrating with Google?

> If Mozilla comes with Send, Notes, Password Manager all integrated in Firefox i see a good way to bring back some of the previous users that switched to Chrome

As a Chrome user I can confirm. But for me the main reason I use Chrome is the dev tools; I find them better than Firefox's.

For me, it's the seamless translation suite

I don't understand the end-to-end encryption claim.

1. Bob uploads a file, but specifies no password.

2. ???

3. Sue downloads the file.

Best case, Bob's browser encrypts it (with javascript?) before uploading. Either Mozilla provides a key, or Bob sends the key he used. When Sue's browser downloads it, Mozilla sends the key and her browser decrypts it client side.

In either case, Mozilla has the password for decryption. This makes a mild barrier to mass scanning content that's uploaded, so at least that's something... but that's little more than a promise I have to trust.

Am I missing something? Where is the "end-to-end" encryption? End-to-end means I don't have to trust you (as much). Please don't turn this into a meaningless buzzword...

EDIT: I did misunderstand something. Please see timvisee's comment below.

The client encrypts the file that is uploaded, along with some metadata. The key is appended to the share URL in the fragment/hash and is never sent to the remote server. Only people who have the URL, including the secret, will be able to download and decrypt your shared file. See https://github.com/mozilla/send/blob/master/docs/encryption....

Thanks for the info. Let me see if I understand this correctly.

Browsers don't send the URL fragment to the server (i.e. with GET requests). FF Send takes advantage of this by using the fragment to store the key for decryption.

That is kinda novel. You still need to trust the upload client to not leak the key, but I see that you've written a CLI version. Interesting! Thanks for the response.
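To make the mechanism above concrete, here is a small sketch using Python's URL parser. The share link below is illustrative, not a real Send URL; the point is that the fragment is split off on the client before any request is made.

```python
from urllib.parse import urlsplit

# Hypothetical Send-style share link; the part after '#' is the secret key.
url = "https://send.firefox.com/download/abc123/#s3cretKeyMaterial"
parts = urlsplit(url)

# An HTTP request only ever carries the path (and query string); the fragment
# is stripped by the client, so the server learns the file ID but not the key.
print(parts.path)      # /download/abc123/
print(parts.fragment)  # s3cretKeyMaterial -- stays client-side
```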

You got it! The only thing you'd have to worry about is malicious JavaScript on the Firefox Send website which I believe would be highly unlikely. And of course, you must keep your link secret.

Yes, such a CLI tool would help protect you against a MITM with malicious JavaScript.

Unlikely but possible. Two words: browser extensions.

Three words: My Ether Wallet.

It's not a new idea; the Megaupload successor did it first, as far as I can remember.

Indeed, the website serves you the crypto code but it runs totally on the client so it's perfectly safe and could in no way be backdoored.

More seriously, did they do anything to fix this obvious design flaw? If they want to fish a key they can just serve you a modified JS file and retrieve the key. Unless of course you chose to audit the JS served every time you browse the website.

How would this be possible to fix? I currently don't see any possibility of this.

Yes, mega.co.nz, the end-to-end encrypted Dropbox analog.

It's cool, but not exactly novel. Mega has done it this way for years.

Anybody who can catch the link in transit can get the file. Emailing these links with the decryption key right in the fragment is going to allow any party in between the sender and the receiver to fetch the file. (If the file is set to only allow downloading once, the receiver can at least let the sender know that it got intercepted.)

So you have to send the link through some previously-negotiated secure channel. At that point, why not just send the file through that channel? Is it because signal/whatsapp/etc don't allow large files or because the interface is cumbersome?

Absolutely. Security and secrecy are not binary, though, it's a spectrum. There are many things where you would mostly want to avoid dragnet attacks and undetected intrusion but don't have concerns for targeted attacks like the one you are describing.

I think this fills the gap for when you want to share not-critically-secret stuff with non-technical people and would today likely send it over something like e-mail, Drive or Dropbox.

I don't disagree. I've been thinking about how I would write the announcement copy explaining to non-technical users how the links should also be treated as secret and how email is not encrypted in general. It's a hard problem.

I think it's relatively intuitive for lay users that the links are secret. If you give someone the link, they get the file. What's not obvious about that?

If anything, it's probably harder to understand for a semi-technical person: someone who has started to think about encryption but hasn't got far enough to spot that the secret key is in the URL itself, as a fragment, and so the URL is the secret.

Computer Security is often nicer here than real world physical security, because we are often able to make the extreme cases so implausible as to be irrelevant, enabling intuitive statements to be true in practice rather than subject to endless caveats.

For example a lay person sees a padlock and they imagine that it cannot be opened except with the padlock key. And this is untrue in lots of ways - so a more technical person may think of some of them, and identify that this particular brand of padlock defends against those well, but not realise that other problems are undefended.

So this means the truth about the padlock has to be more nuanced and relative. Breaking the lock open with tools is "difficult". Picking the lock "cannot easily be done in under a minute". But lay people don't like nuanced, relative statements. It sounds a lot like this padlock won't really stop someone stealing my bike! That's because it won't.

But in computer security we often can make these cases irrelevant in practice. What if someone just tries all the key values for this AES encryption? That's fine, there are so many that even if they could try as many as there are grains of sand in the world, every second, the sun would burn out long before they had a meaningful chance of guessing the right one.
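The "grains of sand" intuition above holds up to rough arithmetic. The figures below are loose, order-of-magnitude estimates, not precise values:

```python
# Rough, order-of-magnitude estimates -- not precise figures.
keys = 2 ** 128                          # AES-128 keyspace
grains_of_sand = 7.5e18                  # commonly cited estimate for Earth's sand
sun_remaining_s = 5e9 * 365.25 * 86400   # ~5 billion years, in seconds

trials = grains_of_sand * sun_remaining_s  # one guess per grain, per second
print(f"keyspace searched: {trials / keys:.2%}")  # well under 1%
```

Even at that absurd rate, only a fraction of a percent of the keyspace gets searched before the sun burns out.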

It's intuitive for lay users that the links are secret, yes. What lay users have trouble understanding is that putting something in an email automatically makes it not secret.

Right. If I were really, really concerned, I would encrypt it locally first and then send it, which you can do with FF Send.

True, but I don't think I'd use this website for anything I'm that concerned about. At that point, I'd encrypt it myself with something like gpg or openssl on the command line.

This fills a handy gap for a lot of people with smaller needs.

> I'd encrypt it myself with something like gpg or openssl on the command line.

> This fills a handy gap for a lot of people with smaller needs.

You point out exactly the problem: the people who are technical enough to deal with GPG's UX competently are also technical enough to evaluate whether they should put a particular document through this Send service.

I don't think nontechnical people have "smaller needs".

Yes. I would like to add though, that you can set an optional password as well. Without it the link would be useless. You can share it through a different channel.
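The value of the optional password can be sketched with a key-derivation function: an auth key is derived from the password, so the link alone is no longer enough to fetch the file. This is a generic illustration; the KDF choice, salt, and iteration count below are demo values, not Send's actual scheme.

```python
import hashlib

# Generic sketch of strengthening a share link with a password. The KDF
# parameters here are illustrative, not Send's actual settings.
def derive_auth_key(password: str, share_url: str) -> bytes:
    return hashlib.pbkdf2_hmac(
        "sha256",
        password.encode(),
        share_url.encode(),  # salting with the URL ties the key to this share
        100_000,             # iteration count: arbitrary demo value
    )

key = derive_auth_key("hunter2", "https://send.firefox.com/download/abc123/")
print(len(key))  # 32-byte derived key
```

Sharing the password over a different channel than the link means an attacker needs to intercept both.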

That's cool, but it's still the same party providing both the storage mechanism, and the JS that encrypts the content on the client-side. You have to trust them that they are not "peeking" at the keys you are generating using their code in your browser.

But at least you only need to trust them now, and not every incarnation of them in the future =)

Not to mention that one could always encrypt it themselves and then use FF Send.

This is where WebCrypto in browser extensions begins to get interesting, I think.

I use a similar mechanism on my website https://expiring.link

I'm working on documenting the code now before I release on GitHub, but it works on the same premise :)

WebCrypto is mana from the gods...

Won't most people just share the links over email? With the decryption key in the url I don't see very good security guarantees here

The key is generated randomly and placed in the URL's hash/fragment, which is not sent to the server. The fragment is the part of a URL after the `#`.

It seems vulnerable to an active MitM - if the attacker is in a position to serve malicious JS that exfiltrates the data from window.location.hash.

I think the scheme is fairly robust against passive interception though.

Client side encryption keeps honest companies honest but no more than that

It doesn't exactly meet the needs of "sending files to a non-technical person", but Magic Wormhole [0] has been truly great for flipping files around between me and anyone who is capable of being trusted to run `pip install --user pipe && pipe install magic-wormhole`. This is by no means everyone, but it's been very useful quite often.

[0] https://magic-wormhole.readthedocs.io/en/latest/

I have no clue why you would suggest a tool that requires using a Linux command line after saying Firefox Send doesn't meet the needs of a non-technical person.

"It" in the first sentence does refer to the solution they mention, not Firefox Send.

No, he's saying Magic Wormhole doesn't exactly meet the needs of a non-technical person.

magic-wormhole works fine on windows and mac; nothing about sh-like shells is linux-specific.

>pip install --user pipe && pipe install magic-wormhole

What am I looking at here? On PyPI 'pipe' is listed as a "Module enablig a sh like infix syntax (using pipes)", and magic-wormhole's own docs just say to install with pip like anything else.

I typoed. I meant `pipx`, not `pipe`. My phone tried to help.

`pipx` is a convenience utility for installing cli python tools in separate virtual environments and then being able to update them nicely: https://github.com/pipxproject/pipx

So i meant

`pip install --user pipx && pipx install magic-wormhole`

Well I'm glad I asked, because that sure is neat!

pipx is rad. `pipx install ipython && pipx inject ipython numpy pandas attrs pendulum toolz` is ... very nice.

I remember elementaryOS had a GUI for this in its app store. Never got around to trying it; Linux is not well known in the consumer world, let alone elementary.

Very clean and nice, but how is this financed?

That is, who's paying for the server storage and the bandwidth?

First off, Mozilla believes in the service. Mozilla itself gets funding from donations and corporate backing (I think). The cost of bandwidth is small compared to other file-sharing sites in that the files stored are temporary. The transient nature of the files means that the max storage space needed is relative to the concurrent number of users; bandwidth also. That means, barring a very clever DDoS, their expenses should be manageable compared to, say, Google Drive, Dropbox, or MS OneDrive.

I remember sending a signed PDF via Firefox Send and was at first horrified when I realized I couldn't get the file back after 24 hours but then relieved knowing that the recipient got it and then it disappeared from the internet. Very cool!

I believe that most of Mozilla's revenue comes from Google profit-sharing, because they make it the default search engine.

If they can't keep up, at least we'll always have the code: https://github.com/mozilla/send

I don't understand how they can afford the bandwidth...

If this were on AWS it would be around $0.09 per GB for downloads.
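At that rate, a back-of-envelope estimate shows why hosting choice matters. The download count and average file size below are hypothetical, chosen only to illustrate the scale:

```python
price_per_gb = 0.09      # assumed AWS egress rate from the comment above
downloads = 1_000_000    # hypothetical monthly download count
avg_size_gb = 1.0        # hypothetical average file size

cost = price_per_gb * downloads * avg_size_gb
print(f"${cost:,.0f}/month")  # $90,000/month
```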

Which is why you don’t host it on AWS. Wrong tool for the job.

Their GitHub page specifically mentions AWS S3 as a requirement, so they are using it.

It mentions "AWS S3 or compatible service". The S3 API is a de facto standard for object storage services, and there are numerous implementations of it.

(Including many which you can easily self-host like Minio, for those who are following along at home and weren't sure whether that was just limited to other cloud services)

Is there a cheaper S3 alternative that you recommend or that Mozilla's likely using instead?

I think Backblaze's cloud storage is the cheapest I've seen. Microsoft and Google would also be a bit cheaper than S3

As you mentioned, huge fan of Backblaze B2. No affiliation, just a satisfied customer. Can also pair it with Cloudflare for even less expensive traffic serving.


But it's not S3-compatible.

One of their KPIs is: "Percent of users who have or create an FxAccount via Send, Why: representation of % of any service users who might be amenable to an upsell"

From this it seems that their moneymaker is the new Firefox account creations that will be driven by this service, to whom they can then upsell. But it doesn't state what they are trying to upsell. Anyone got any idea what that might be?

I would imagine that to be Firefox itself and Firefox for Android/iOS. I've seen people easily set up syncing on Google Chrome because they already have a Google account to which they might even already be logged in, while they're completely unaware that Firefox has a similar feature.

If you already have a Firefox account, the barrier to using Firefox Sync is lower, and with that, the barrier to using Firefox for Android/iOS is lower.

We'll find out by the end of the year:

Secondary - In support of Revenue KPI

We believe that a privacy respecting service accessible beyond the reach of Firefox will provide a valuable platform to research, communicate with, and market to conscious choosers we have traditionally found hard to reach.

We will know this to be true when we can conduct six research tasks (surveys, A/B tests, fake doors, etc) in support of premium services KPIs in the first six months after launch.


Presumably Mozilla, just like they do for the sync and Web Push servers.

I think what the previous poster meant was 'why' are Mozilla paying for it?

Ah. In that case, I seem to recall they performed user research among users of their browsers that uncovered that sending files to others was still a major pain point. It's also a way in to promote a Firefox account, and Firefox in general.

Of course Mozilla's not in it for the money, so there's not a direct line from Send to more revenue. Firefox is their main tool to protect the open web, and Send is a way to get more people to use that. And of course, being able to send files encrypted is good for the web as well.

Indirectly, it is primarily financed by the search engine deal in Firefox.

The open web is more than just a browser. The cost is minimal when you have your own infra instead of AWS’ bandwidth gouging.

Another user pointed out that Firefox Send is written to use an Amazon S3 compatible API to run. That could mean that Mozilla is using AWS for the service.



> Key Business Question to Answer: Is the value proposition of a large encrypted file transfer service enough to drive Firefox Account relationships for non-Firefox users.

The metrics section is interesting https://github.com/mozilla/send/blob/master/docs/metrics.md

Oh interesting. Their two hypotheses (which they will test) are that Send "can drive Firefox Accounts beyond the Firefox Browser" and that it will "will provide a valuable platform to research, communicate with, and market to conscious choosers..."

It sounds like they're investigating a premium service offering targeted at privacy conscious users. (The secondary hypothesis covers "revenue" and will be tested by conducting "research tasks ... in support of premium services KPIs.")

This is perfect! I'm currently taking a networking class where we generate trace reports, and I've just realised how tricky it is to send files without logging in (I'm averse to doing that on a machine that's not mine). I can email my trace files, but I need to log in; I can store them in Dropbox/Drive, but again I'll have to log in.

I wish they added a QR code option as well. It would be perfect for quickly copying the link by snapping it with my phone so I can download later.

It's a fantastic service and I'm glad to see it leave the experiment stage and become official. Highly recommended.

I really don't understand why they didn't share a link to the repository in the article. For anyone who's interested - here it is: https://github.com/mozilla/send

It's because this blog is for mainstream audiences who don't know what GitHub is and might be scared of all that code-y stuff if they accidentally clicked on it.

I'm not so sure about that. I have a difficult time believing that anyone in the "mainstream audience" would take the time to read Mozilla's blog posts, or more generally the blog posts of any tech company.

The same idea (e2e decryption key in fragment/hash) is used by the self-hosted Lufi. Public instances are running at https://upload.disroot.org/ and https://framadrop.org/ and the code is here: https://framagit.org/fiat-tux/hat-softwares/lufi Maybe someone can comment on how Lufi compares to Firefox Send (performance, usability?)

I also think the blog post could explain more why and how the e2e encryption works. Maybe just by showing an example link and then highlight with colors "this part is private"?

I've been using send.firefox.com for months and so far the only downside was the 1 day expiration. Very glad you can now opt for 7 days.

This is awesome for sending private documents to family (tax season, anyone?), especially when your family isn't inclined to learn cryptography to set up their own solution. Will be trying this ASAP.

Open source peer-to-peer solution in the browser using WebRTC: https://file.pizza/

Wow, that's really neat. The downside is that it only works while the page stays open on the uploader's machine, while send.firefox.com uploads the file to a central server for a limited time, so you can close the tab before the recipient downloads it.

If I've got this right, the file is encrypted using a secret key which is generated on the client and appended to the anchor in the link, like:


Anyone who obtains the link (e.g. via email interception) gains access to the file.

Since browsers don't transmit the anchor when requesting a resource [1], Firefox servers never see a copy of the key. Provided you trust their JavaScript.

[1] https://stackoverflow.com/questions/3067491/is-the-anchor-pa...

> Anyone who obtains the link (e.g. via email interception) gains access to the file.

True, but, if a third party decides to use the intercepted link to download the file, and you have it set to a limit of 1 download, the file will self-destruct (if you trust Mozilla). This way, the recipient can know that someone has tampered with the communication, which is certainly an improvement over the status quo (email attachments).


How do they handle abuse though? Like, people using it to host, say, pirated TV shows? Maybe a max download limit that makes it impractical for that use case?

The files are available until they have been downloaded (from 1 to 100 times) or until a certain timeframe has elapsed (from 5 minutes to 7 days). See the screenshot in the article.

We are working on a plugin for BitTorrent that will automatically re-upload a file to Firefox Send when the old link expires and then make the new link available in the torrent.

Why? Why wreck a good thing so no one else can enjoy it?

The torrent protocol is already there. Don't put that cost on the Mozilla Org.

I certainly hope that Mozilla can/will detect and punish this sort of abuse.

This is why we can't have nice things.

2.5GB file limit is a bit small for good quality TV shows (and especially movies).

With HEVC 2.5GB is perfectly fine for a 2 hour movie.

Maybe for 1080p. It's 10-15GB for 4k.

I think we can all agree 1080p is "good quality", right? Your average pirate probably isn't demanding 4K quality, I imagine.

I'd be amazed if a majority of people were downloading 4K movies these days.

Yeah, the majority are probably not, but I'd prefer to download the highest quality available.

Then you understand "How do they handle abuse though? Like, people using it to host, say, pirated TV shows?" remains an open question.

Depends on the runtime. For low quality Netflix dumps (from Netflix, not because it got reencoded) it's usually enough, even for 1080p.

One can always split files.

WinRAR and 7-Zip can break files up into chunks of any size you specify.
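The same chunking those archivers do can be sketched in a few lines. `split_file` and `join_files` are illustrative helpers, not real tools; the idea is just to keep each piece under an upload cap like Send's 2.5 GB limit.

```python
def split_file(path: str, chunk_size: int) -> list[str]:
    """Split `path` into numbered chunks of at most `chunk_size` bytes,
    like an archiver's volume feature."""
    parts = []
    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(chunk_size):
            part_path = f"{path}.{index:03d}"  # data.bin.000, data.bin.001, ...
            with open(part_path, "wb") as out:
                out.write(chunk)
            parts.append(part_path)
            index += 1
    return parts

def join_files(parts: list[str], out_path: str) -> None:
    """Reassemble the chunks in order on the receiving end."""
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                out.write(f.read())
```

The receiver does have to collect every chunk's link before the first one expires, which is the practical catch with splitting over an ephemeral service.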

Even single episodes?

That should be fine, though I'm pretty sure I have some episodes that are above that.

Another neat feature actually built into Firefox is Take a Screenshot. To the right of the URL field, in the three dots menu. Option to save it locally, or save in the cloud with a URL with some expiration options. Sorta like a pastebin for screenshots.

It only takes screenshots within the confines of a Firefox window.

Glad you like it (I worked on it). Just a side note, the cloud service will be going away in the future, but the ability to save it locally will remain.

This service is really great and I'm sad to read it will go away - but of course, it makes no sense from a money perspective to keep it free forever. Still, the initial idea - saving screenshots made by the browser to the cloud - is fantastic, and much more convenient than the myriad SaaS products that provide this kind of service.

Replacing the cloud bits with Firefox Send integration seems a fairly obvious idea then?

That has been discussed :-)

I've used this before to send sensitive documents to my attorney, who would have otherwise just wanted email attachments. It worked great.

Based on what I've read, the security model seems to be almost the same as email attachments?

One really big advantage of Send over attachments is that you don't have seemingly immortal copies of the files hanging around in people's mail clients and/or IMAP servers.

Why not "Mozilla Send"? If Firefox the browser isn't a requirement, the name is confusing.

The same reason it's Chromecast, not Googlecast.[1] Branding.

[1] The protocol is named Google Cast, but all the consumer branding is Chromecast.

I was thinking the same thing, but in Google's case, Chrome is the dominant browser and most people recognize it as something they already have. In the case of Firefox, it's more likely they'll recognize the name specifically as the browser they don't have and will think they can't use it.

They'll recognize it as the browser they don't have and maybe should get, because it's now positively associated with cool new features like this. :)

I think what helps is that there are two (or more) parties to a file transfer: the sender and the recipient. Someone who uses Firefox might start using Send, and then the recipient(s) find out that they can use it too. And if they're using Send, they might start to consider using Firefox, or creating a Firefox account first.

I was confused too. When it worked in a non-Firefox browser, it was a pleasant surprise. I'm guessing this is just to promote the Firefox browser. It wouldn't surprise me if they added a higher file limit after usage grows, and with it a paid tier :)

It would be really amazing to build some sort of integration in commonly available WiFi connected scanners and printers.

Currently, my scanner conveniently sends me emails with scanned documents. But I have no insight into how they actually store and delete the documents on the backend.

Would be great if the scanner had the option to upload to Firefox Send and show me a QR code to download it on other devices.

How is this using end-to-end encryption? It seems like the recipient just clicks a link to download. How can it have been encrypted for that person? end-to-end encryption normally means that there's no way for the intermediary to unencrypt the data but I can't see how that's possible in this case.

IIRC, the link contains an anchor `#abc123` which is the decryption key. Browsers do not send the anchor part of the URL to the server, so decryption happens in the browser.

Hinges on the browsers never sending that key, though.

Client-side JavaScript encrypts locally before uploading and puts the encryption key in the URL you share with someone; the key never gets sent to Mozilla. Decryption also happens client-side for the person you shared the link with. It's end to end.

With the caveat about client side browser encryption in general, which I'm sure someone will pop in here and explain in detail. :)

The url effectively contains the decryption key, so the web server could be set to capture the urls and decrypt files.

If you want, you can also set a passphrase on the file to share via another channel

That's why the key is in the hash part of the URL; the server can't access that (unless it also sends client Javascript that parses it and sends it back to the server, but that could be detected).

What if I'm on a network I don't trust? Is the only option to set a passphrase? More importantly, the UI doesn't call this out explicitly, so uninformed users may think it's "secure enough" without a passphrase.

The browser will never send the key across the network by itself because it is in the fragment. Of course, you have to get the url with the fragment off your computer and to the intended recipient, so a MITM of this communication could intercept and download the file before the intended recipient. The intended recipient would know that this has happened, though, as the link will then be expired (assuming it was set to 1 download); if this is a fear, I would suggest adding a passphrase and sending the passphrase out of band, for example over a voice call.

The anchor hash/fragment (#hello) isn't sent over the network.


"(...) the fragment identifier is not used in the scheme-specific processing of a URI; instead, the fragment identifier is separated from the rest of the URI prior to a dereference, and thus the identifying information within the fragment itself is dereferenced solely by the user agent, regardless of the URI scheme."

Ed: as for an untrusted network, tls should be able to secure that. Except if the network owner can insist on/enforce a tls stripping mitm/proxy.

> The url effectively contains the decryption key, so the web server could be set to capture the urls and decrypt files.

If that's the case, I think setting a passphrase should be mandatory. Proxy servers are extremely common at every workplace. Since they probably log all requests, they will capture all keys in the URL.

The key is in the fragment and thus is not sent to any server.

Much of the data I share with friends using Dropbox is time-limited data in the 1-2 GB range.

For certain reasons I get a ton of dropbox space, but for my friends, data quotas kick in on even simple files shared like this.

I believe this is a primary upgrade mechanism for DB--I'd say this new firefox offer is in competish.

How do they pay for the storage costs? What's the upside for Mozilla?

> How do they pay for the storage costs?

Using their revenue from search, like everything else they pay for.

> What's the upside for Mozilla?

"Our mission is to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent."

Upside is that this is another reason to get a Mozilla account.

Google Search, Yahoo Search.

I had the expectation that it would use WebRTC before opening the link, disappointed on that side. But really glad of the privacy minded offer. I appreciate Mozilla's work and effort towards a more private and encrypted internet!

WebRTC and privacy don't exactly go together well.

Sharesecret (my company) provides a similar service, along with a slack extension for anyone who needs a commercial product. https://sharesecret.co

I'm on board as a regular user of send.firefox.com. How does Mozilla have the money to offer this for free?

Maybe they just don't need too much storage, since files expire quickly. This would be an interesting thing to graph, if they release the statistics.

Mozilla has quite a bit of money, most of it from their default search engine deals. I'd wager to guess that most of it goes to wages.

Net income 2017: 89 million. Not that much for a company employing more than a thousand employees. But impressive for a corporation that is 100% owned by (and allegedly managed like) a non-profit org.

Does Firefox Send work on browsers besides Firefox for sending and receiving files? It's blocked at my office, so I can't test it.

Yes. Tested on Chrome, Safari, and Edge.

Oddly, it doesn't work for me (FF 65.0.2, windows 7) -- I just get an inert white rectangle in the middle of the screen. I tried turning off ublock origin and DNT settings, but it still is just a rectangle.

It works on chrome, and does not work on IE 11 (win 7 doesn't support edge)

This seems to be a known bug if you have used the old version of send in your profile that may be fixed now. If you try it in a private browsing window and it works, it's probably that bug.

Sure enough -- it works in a private window. Is there a known fix for this, or do I need to create a new profile?

It's been fixed and the fix is deployed to prod. Might need to clear some cache.

The page states that it'll be available on all browsers, and an Android app is going to be released later this week.

Swisstransfer.com is more or less the same, but with 25 GB and no sign-up.

Regarding the differences: this website does not seem to encrypt the files on the server, and does not provide links directly, so you need to provide at least one valid email address, if only to send the link to yourself to then forward it to the party you want to share the file(s) with. It's also not open-source AFAICT.
