Firefox’s JS client requires the whole file to be in memory in order to perform the encryption, and has to hold the whole downloaded file in memory in order to decrypt it. My client doesn’t have that limitation, so it could theoretically upload much larger files (subject only to the server’s upload limit).
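For the curious, the gist of chunked encryption is something like this (a minimal sketch, not Send's or my client's actual scheme; the chunk size, IV scheme, and upload step are my own assumptions):

    // Encrypt a large file chunk by chunk so it never has to fit in memory.
    import { createReadStream } from "node:fs";
    import { webcrypto } from "node:crypto";

    async function encryptChunked(path: string, key: CryptoKey) {
      let counter = 0;
      for await (const chunk of createReadStream(path, { highWaterMark: 1 << 20 })) {
        const iv = new Uint8Array(12);
        new DataView(iv.buffer).setUint32(8, counter++); // unique IV per chunk
        const ciphertext = await webcrypto.subtle.encrypt(
          { name: "AES-GCM", iv }, key, chunk as Buffer
        );
        // ...upload this chunk's ciphertext, then drop it from memory...
      }
    }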
Mozilla does not have the ability to access the content of your encrypted file [...]
Exactly. One has to trust Mozilla every time one visits the page. They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?
Web-page-based crypto is fundamentally insecure, and Mozilla is committing an extremely grave error in encouraging users to trust it (as they also do with their Firefox Accounts). Security is important, and snake-oil solutions are worse than worthless.
Is it perfect? No, it isn't. But it is still a considerable improvement.
If you have a better solution in mind for the average user crowd, feel free to suggest it, of course.
As a show of nothing-up-the-sleeve, a service asserts that it's in a stable state and will continue to serve exact copies of the resources as they exist now - that they will not change out from beneath the client in subsequent requests. When a user chooses to use resource pinning, the browser checks that this holds, and if it finds that a new deployment has occurred, it refuses to accept it without deliberate consent and action from the user (something on par with the invalid-certificate screen).
This means that for a subset of services (those whose business logic can run primarily on the client side), users need not trust the server; they need only trust the app, which can be audited.
When deploying updated resources, services SHOULD make the update well-advertised and accompany it with some notice out of band (such as a post about the new "release", its changelog, and a link to the repo), so the new deployment may be audited.
When new deployments occur, clients SHOULD allow the user to opt to continue using the pinned resources, and services SHOULD be implemented in such a way that this is handled gracefully. This gives the user continuity of access while the user (or their org) carries out the audit process. (A rough sketch of the client-side check follows the list below.)
Areas where this would be useful:
- HIPAA compliance
- Web crypto that isn't fundamentally broken
- Stronger guarantees for non-local resources used in Web Extensions—in fact, the entire extension update mechanism could probably be more or less rebased on top of the resource pinning policy
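A minimal sketch of what the client-side check might look like, assuming the browser has stored a SHA-256 pin for each resource (all names here are hypothetical):

    // Verify a fetched resource against its pinned digest before use.
    async function verifyPinned(url: string, pinnedSha256Hex: string): Promise<Response> {
      const resp = await fetch(url);
      const body = await resp.clone().arrayBuffer();
      const digest = await crypto.subtle.digest("SHA-256", body);
      const hex = [...new Uint8Array(digest)]
        .map(b => b.toString(16).padStart(2, "0")).join("");
      if (hex !== pinnedSha256Hex) {
        // A new deployment happened: stop and ask the user, on par with
        // the invalid-certificate screen.
        throw new Error("pinned resource changed");
      }
      return resp;
    }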
Caching proxies would suddenly become viable again, because only the first download has to go through HTTPS, while "I don't have this in the local cache anymore, can you serve me this content" requests could go through sidechannels or outside the TLS handshake or something like that. Caches could even perform downgrade non-attacks.
When it's secure, it's an improvement; if Mozilla, a Mozilla employee, or a government which can compel Mozilla employees chooses to make it insecure, then it's worse than useless. At least with something like Dropbox, users (should) know that they are insecure and should not transmit sensitive files.
> If you have a better solution in mind for the average user crowd, feel free to suggest it, of course.
The functionality should be built into Firefox, so that users can verify source code & protocols once and know that they are secure thereafter.
And trust that Mozilla won't randomly distribute a backdoor to 1/n of users?
The means you're suggesting aren't possible to implement for most people today. If you care about real-world impact I would recommend thinking of other strategies.
Solving the problem for proprietary operating systems, which intentionally have horrific systems for managing the software on them, is harder due to an artificial, self-inflicted handicap. "Just" switching people to Linux is probably easier (hey, Google managed to get people to run Gentoo, after all).
If your solution is to switch the entire world to Linux then you may want to figure out how to do that. Many have tried and failed before. Good luck.
The reason is that if you trust the site to be secure, a compromise can be devastating, whereas with no security you're typically more careful.
As a Windows user I mostly use 7-zip for this purpose, or the encryption plugin in Notepad++ for text.
While I agree that doing it manually is the only reliable way if you're going to send it over an insecure channel, if the channel is secure then it's much easier for an end-user to just send it in the app.
You just need to trust someone. So what's wrong with trusting Mozilla, if you can so easily trust your encryption/decryption software?
This is one of the goals of the legal system: to make it so we can usually trust each other. There are no real long-term technical solutions to this problem.
So if you want to make sure you're safe, read their EULA or equivalent.
Not to mention that Send likely doesn't have a warranty (like most software under a free software license).
Using in-web-page crypto gives users a false sense of security. This is, I believe, a very real problem.
What WebCrypto guarantees is that it is truly Mozilla's code that you're trusting, since the WebCrypto APIs are only available in a secure context (HTTPS or localhost).
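You can see this for yourself in a devtools console (a trivial check, nothing Send-specific):

    // SubtleCrypto is only exposed in secure contexts, so on a plain-HTTP
    // page crypto.subtle is simply absent.
    if (window.isSecureContext && crypto.subtle) {
      console.log("secure context: WebCrypto is available");
    } else {
      console.log("insecure context: no crypto.subtle here");
    }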
No, because I use the Debian Firefox, which means that I'm trusting the Debian Mozilla team. I feel much better about that than about directly trusting Mozilla themselves.
I don't trust auto-updates.
About the auto-updates: CCleaner recently had an incident where version 5.33 had a backdoor injected by some 3rd party. If you downloaded version 34 you were safe. If you had version 32 and configured it to auto-update, you got the malicious update. But the backdoor didn't affect the auto-update setting as far as I know, so if you had it on, you would have gotten an automatically delivered clean version in about two weeks' time.
Point: the worst situation was if you did not have auto-updates on and downloaded v. 33. Then you were stuck with it until somebody told you you had malware on your machine.
You're damned if you do and damned if you don't.
That's reasonable for a technically savvy user, but the vast majority of users do not use Debian. They use Windows or OS X and rely on trusted corporations like Apple, MSFT, Google, and Mozilla to keep their systems patched.
Which is still a bad idea to trust, so I'd say that it is a logical extrapolation.
I mean, sure, you can never use an auto-updating application again and always manually review system updates before installing them. But realistically, I don't see anyone besides Richard Stallman adopting that lifestyle.
Not if you're using a Linux distribution's browser packages (we patch out the auto-update code because you should always update through the package manager). And distributions build their own binaries, making attacks against builders much harder.
While people might trust auto-updating applications, they really shouldn't. And there are better solutions for the updating problem.
Sure, nobody ever said otherwise. But that doesn't mean it's a good idea.
The software I use is open-source, so I can see what I'm running, and what updates I get. I also don't use any auto-updates. The web is inherently different in that I can't really guarantee that the code I get is going to be the same code that you are getting.
For most people, your advice is quite plainly wrong. Most people should have everything on auto-update.
Or you can auto-update to a version which is malicious. Then you are screwed. But the version you originally downloaded might have contained the threat to begin with, so just saying "don't auto-update" doesn't really protect you from malicious versions. Auto-updating does mean that you get security fixes promptly, making you less vulnerable.
The original, non-updated version can be malicious even from a vendor you think you should be able to trust because it is a popular product used by many others:
How could they do that easily? Their source code is public, and many third parties work on it and produce their own compiled versions - plus any security people tracking unexpected connections would catch it.
This is where IPFS could be useful. It's content-addressed, so the address guarantees that you're getting the same, unmodified content.
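The core idea in one function (simplified; real IPFS CIDs add multihash/multibase framing on top of the digest):

    // A content address is derived from the bytes themselves, so any
    // modification to the content changes the address.
    async function contentAddress(bytes: ArrayBuffer): Promise<string> {
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return [...new Uint8Array(digest)]
        .map(b => b.toString(16).padStart(2, "0")).join("");
    }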
Bear in mind they also make the web browser.
It's wildly different from a JS file that's loaded every time you visit the website.
Unless Firefox provides fully reproducible builds on your platform from an open source compiler, you have no guarantee that the binary you have is built from the public source code. You have to trust Mozilla.
Without reproducible builds, compiling the source yourself would be the way to go.
Anyway, I agree that it should be clear that this file sharing service, while convenient, essentially requires you to trust Mozilla with your data. The claim "Mozilla does not have the ability to access the content of your encrypted file..." is fragile.
Well, the advantage of the client is that you can inspect the source, so you can verify that it doesn't do anything nefarious with location.hash (like sending it back to the server).
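Concretely, the pattern you'd look for when auditing is something like this (a sketch in a module context, not Send's exact code):

    // The fragment (everything after '#') is never sent in the HTTP
    // request, so the key can live there and stay client-side. What an
    // audit has to confirm is that the script never transmits it.
    const secret = location.hash.slice(1);
    const keyBytes = Uint8Array.from(atob(secret), c => c.charCodeAt(0));
    const key = await crypto.subtle.importKey(
      "raw", keyBytes, { name: "AES-GCM" }, false, ["decrypt"]
    );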
As long as websites have no fingerprint that encompasses all loaded resources, and can't be pinned to that fingerprint, crypto in the web browser is not trustworthy.
If you don’t trust Mozilla, or you are sharing information that a nation-state attacker would coerce Mozilla into revealing, then you’re already set up to encrypt the file yourself first - at which point you can send it with any service, including Firefox Send.
Or you (and some friends from organizations like the EFF and FSF) can read the source code to see what it does, and even compile it yourself. If you do that, you only need to trust the compiler.
Here's the original discussion: https://news.ycombinator.com/item?id=2935220
Since boring crypto shouldn't have weird failure modes like this, I'm thinking this design is a big mistake?
EDIT: I think it was Instawallet and apparently while they had robots.txt set to prevent crawling, the theory was people typing their URL into Google (or Omnibar) would alert Google to the URL and it got into search results anyway.
I know that web-keys is based on the theory that since the fragment isn't sent by User-Agents in the Request-URI it's secure, but there are things that see the full URL which aren't conforming agents, and it just seems risky for any long-lived secret.
It seems that BitTorrent protocols are pretty close, but I don't think there is a seamless client that allows for "magical" point-to-point transactions.
SyncThing is a free software alternative, but I wouldn't say that it is usable (yet) for the average user.
Syncthing is simply not designed for the mainstream. Resilio is. If Syncthing devs can fix that, I'll gladly start using it over Resilio with my non-technical friends.
(There's also a closed-source iOS app called fsync(), but I found it too slow/unstable to use.)
Previously on HN: https://news.ycombinator.com/item?id=14649727
You need to connect to another computer.
First you need to know a route to that computer. If it has an externally reachable IP address and you know that, then great. If it has an externally reachable IP address and a DNS entry and you know _that_, then also great.
If you don't know the IP address or domain name of the other computer then you'll have to do some kind of lookup/exchange to find it. That means some kind of centralised service to provide the lookup functionality.
If the other computer doesn't even have an externally reachable IP address, then a central service is going to have to act as a connection point which you can both connect to (or provide some other method of helping the two of you connect).
I'm not aware of any entirely decentralised system which would allow two computers which are behind NAT to find and then talk to each other. Or any obvious design which would work there.
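For reference, the centralised part can be tiny - a rendezvous that introduces two peers to each other is a few lines (a Node sketch with a made-up wire format):

    import * as dgram from "node:dgram";

    const server = dgram.createSocket("udp4");
    const waiting = new Map<string, dgram.RemoteInfo>();

    server.on("message", (msg, rinfo) => {
      const token = msg.toString(); // a one-time code both peers agreed on
      const other = waiting.get(token);
      if (!other) { waiting.set(token, rinfo); return; }
      // Tell each peer the other's publicly observed address and port.
      server.send(`${rinfo.address}:${rinfo.port}`, other.port, other.address);
      server.send(`${other.address}:${other.port}`, rinfo.port, rinfo.address);
      waiting.delete(token);
    });

    server.bind(3478);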
Three examples of this, implemented and working:
Ongoing work is happening, for example by the webtorrent folks, to remove this constraint.
Do these services also act as connectors in case of double NAT?
Would that meet the requirements without being a "central" service?
Just take any .onion domain and append it with .to and it should work. examplelettersblah.onion becomes examplelettersblah.onion.to and that works without needing to muck about with Tor.
This would be a horrible idea if you're concerned with privacy/anonymity. But, it'd make it easy for people to download your shared files.
then you're clearly not the person we should be asking to build this kind of thing, are you? :-)
My proposal actually solves two important problems with peer-to-peer systems, and I barely have to write any new code to make it work! The solution? i2p! It's an anonymous mix network, much like Tor, but completely decentralized. Using an intermediary mix network fixes the NAT issue and prevents you from leaking your IP address to other peers.
(Serious question - I've not used it myself, and I'm intrigued at the process.)
Endpoints are identified by their public key hash. Each endpoint maintains a set of anonymized routes. This routing information is stored in a Distributed Hash Table (DHT). If you want to connect to another endpoint, you look up the route for the public key hash, build an outbound route, and you're good.
More concretely, an IPFS transfer would work by using public key hashes in place of IP addresses to identify peers, with a known set of endpoint keys for bootstrapping.
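The flow, roughly, with a made-up DHT interface (real i2p/IPFS APIs differ):

    // Hypothetical interface; names are for illustration only.
    interface RouteInfo { hops: string[] }
    interface Dht { get(keyHash: string): Promise<RouteInfo | null> }

    async function connectTo(peerKeyHash: string, dht: Dht) {
      const route = await dht.get(peerKeyHash); // 1. look up the anonymized route
      if (!route) throw new Error("peer not found in DHT");
      // 2. build an outbound tunnel along route.hops, then address the
      //    peer by its key hash instead of an IP.
    }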
I like it.
Now, is it available today in a form (as the OP put it) which "A person whose computer knowledge extends to using facebook" can use?
Am I missing something?
1. click bit torrent
2. click create torrent
3. type the path to your file
4. share magnet link
- IPv6 kind of helps here, at least if we hope that no NAT standard ever makes it into IPv6. Crossing my fingers.
- There does exist at least one NAT hole-punching technique that can traverse two NATs with no central server, using ICMP-based hole punching and UDP. Obviously, like all hole-punching techniques, it only works on certain kinds of NATs, and firewalls can kill it.
I think you're referring to this? Clever hack indeed.
The fact that it doesn't support NAT is seen as a risk in such environments (which, rightly or wrongly, consider it a second level of firewalling). Some deployments that I've seen use only site-scoped addresses with no globally-routed prefixes. Everything Internetty has to go over IPv4, which makes the Security team's job easier: drop all IPv6 at the DMZ and drop all non-NAT IPv4.
I think GNUnet has some stuff in it which can even work across different protocols, e.g. hopping across TCP/IP over Ethernet to Bluetooth to packet radio. It has some neat stuff where, as I recall, it probes out a random path, then tries to get at its intended destination, remembering successful attempts. It definitely sounds cool, but it's not currently in an end-user-usable state.
> I'm not aware of any entirely decentralised system [...]
What's the user value of having something entirely decentralized? I see the value in making sure the actual file transfer doesn't go through the central rendezvous. But I don't understand what's gained by eliminating the use of a little help setting up the connection.
instant.io works pretty well for me; it uses the BitTorrent protocol, but over WebRTC.
Or run a local SFTP server with UPnP and avoid the extra complexity.
In terms of finding each other, the client could get its external IP, open up a UPnP port, and provide the user with a QR code or brief snippet to paste into a chat conversation, which the other user would feed to their client to initiate the file transfer.
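The blob behind that QR code could be as simple as this (field names invented for illustration):

    // Everything the receiving client needs to dial back in.
    const offer = {
      ip: "198.51.100.23",    // external IP discovered via UPnP (or STUN)
      port: 52341,            // port mapping opened via UPnP
      key: "<base64 session key>",
    };
    const payload = btoa(JSON.stringify(offer)); // render as QR or paste into chat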
You could do this commercially by providing an external website with which the clients could communicate out of band to set up their transfers (for free), and (for pay) optionally provide a cloud sync service (which would help person A "pre-send" a big file transfer until person B was available to receive it).
While some of this is available today, you're right, I don't think there are accessible clients. ICQ, AIM and IRC used to be the solution, as they all did P2P file transfer. But now everything is on the web, so everything sucks.
Even if both parties are behind a firewall that won't respect UPnP, there's UDP hole punching which should usually work. Mozilla would just need to host a server to handle the port number handoffs.
I imagine this would handle 90% of the use cases. The rest would use traditional 3rd party hosting, IM transfers, email, etc.
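UDP hole punching itself is surprisingly small - a sketch (Node, with the peer's public address already learned from the rendezvous server; addresses are placeholders):

    import * as dgram from "node:dgram";

    const sock = dgram.createSocket("udp4");
    const peer = { address: "203.0.113.7", port: 40000 }; // from the rendezvous

    sock.bind(40000, () => {
      // Both sides send a few packets to each other; the outbound traffic
      // opens a mapping in each NAT, so inbound packets start passing.
      const timer = setInterval(
        () => sock.send(Buffer.from("punch"), peer.port, peer.address), 500);
      sock.on("message", (msg, rinfo) => {
        clearInterval(timer);
        console.log(`punched through: ${rinfo.address}:${rinfo.port}`);
      });
    });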
I don't disagree with you but I kind of expected a note as to why this wasn't a solution in this context.
I think an untethered solution to this problem would look something like an IPv6 world where everyone has an IP address: I could just put in your address and send you a file, and as long as our computers were on and connected to the web, it'd get to you.
As an aside, I have recently noticed that visits to e-commerce sites sit near-permanently at the top of my Firefox history, above other sites. Is Firefox doing deals with Amazon and others, and has this been disclosed?
Doesn't work in Safari though (haven't tested IE), so maybe it is not general enough for your use case.
The problem is that there are none with enough of a user base to have a network effect, and users hate installing new apps.
It's 2017 and it's still hard and somewhat dangerous to install apps.
I learned about Firefox Send when it launched but completely forgot about it until now. I would definitely use it more if it had an easy-to-access (read: not buried in the Settings page) shortcut in the browser.
You could abuse it by sending the same file many times in order to create lots of download links, but there's little to gain in bandwidth savings: you might as well run your own server. The only advantage I'd see is hiding your IP address, but then you could also run a Tor hidden service. The other would be bandwidth amplification by synchronizing all the clients (for big files).
I ran the Send link through Typer.in (specializing in hand-typed URLs) and it worked as I initially expected. However, it would be nice if Send had this functionality by default.
The Send link must include the secret key, because no one else should get it, and that key must be of sufficient length to protect your file. Thus human-readable-izing it could do nothing to decrease its complexity and would just result in a huge string of words that would be just as much of a pain to type in.
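Back-of-the-envelope, assuming a 128-bit key and a Diceware-style 7776-word list:

    // How many words does it take to encode the key?
    const keyBits = 128;
    const bitsPerWord = Math.log2(7776);                  // ≈ 12.92
    const wordsNeeded = Math.ceil(keyBits / bitsPerWord); // = 10
    // Ten words, every one of which must be typed exactly - hardly an
    // improvement over copy-pasting the link.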
Or, if you are a Telegram user, you can send the link to yourself.
You can also use your mail inbox.
There are, I suppose, two use cases: 1) sending to yourself, and 2) sending to others. The second use case is fine under the current workflow, but for the first, the workflow is annoying.
If using Firefox Sync, perhaps the link could be synced over?
You can instead save it as a self-decrypting document and attach it to an email, copy it to a thumb drive, upload it to Dropbox, etc.
Something like a 3-download limit would make this more usable!
I remember them saying they are now going back to their core competencies. I think it was after Firefox OS failed.
Not trying to piss on anyone's parade here, just wondering how this kind of thing keeps happening. I was wondering the same thing when Mozilla added Pocket and now Cliqz to Firefox.
What is the rationale here? Do they have leftover money they need to spend before January 1st or something?
Firefox is Mozilla's flagship, and by far the largest way in which we achieve our mission, but our goal is a healthy and open internet.
Additionally, this is a great way to determine whether something like this would work well as an in-browser feature, and we've built it in such a way that it works in more browsers than just Firefox on day one.
Sure wish other browser vendors would consider other browsers when releasing their products.
Most of the wealth in silicon valley comes from productizing eyeballs.
In related news, this just triggered someone into downvoting my entire post history!