But some of you have, and I figured they make for fun and interesting stories ;-)
So what's your favorite method for sending large files, say 5GB or bigger? Though I'm also curious how you'd send 10TB or more.
Transferring the torrent metadata is pretty trivial and can be done via a wide range of methods, and having that flexibility can be nice.
Unlike HTTP, you get reasonable retry behavior on network hiccups.
Also, more robust data integrity guarantees, though a manual hash test is probably a good idea either way.
At least among people I'm throwing TBs of data around with, torrent infra is common and it's nice to not have to deal with some special-purpose tool that, in practice, is probably a pain in the ass to get compatible versions deployed across a range of OSes. Basically every platform known to man can run a torrent client of some sort.
And obviously, no dependency on an intermediary. This is good if you're trying to avoid Google et al. That does, however, bring a potential con: if my side is fast and your side is slow, I'm seeding until you're done. If I'm uploading something to gdrive or whatever, I can disconnect once the upload is done. If you control an intermediary like a seedbox, that's less of a problem.
In general, though, torrents are pretty great for this sort of thing. Just encrypt your stuff beforehand.
For this reason I prefer using something like Syncthing which is designed more with this purpose in mind.
The inspiration for this tool was assisting with LAN parties. The one thing I have against the private flag is that it also disables LAN peer discovery, which would be okay if you use a tracker on the LAN (though the torrent would need to be modified if the hostname/IP changes). Since you can't configure other people's clients, I found it simpler to use aria2 preconfigured to disable PEX and DHT.
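A rough sketch of the kind of aria2 invocation I mean (the torrent name is made up; check your aria2 version's docs for the exact flags):

# DHT and PEX off, local (LAN) peer discovery left on
aria2c --enable-dht=false --enable-dht6=false --enable-peer-exchange=false --bt-enable-lpd=true lanparty.torrent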
Combine this with Metalink or Web Seeds, so that you can have an initial seeder served over HTTP. I think an IPFS gateway would make a great web seed, as long as their gateway and Cloudflare's stay up. IPFS creates a permanent URL, if you will, so there's no need to worry about dynamic IPs or domains. It would be great if IPFS had a smaller binary (right now it is around 20MB compressed) and a way to fetch a file and "seed" it like a torrent. But for now I think torrent is mature enough.
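If you go the web seed route, mktorrent can bake the HTTP seed in at creation time; something like this, where the tracker and mirror URL are placeholders:

# -a: tracker, -w: web seed (HTTP mirror of the same file), -o: output torrent
mktorrent -a http://tracker.example.lan/announce -w https://example.com/pub/big.iso -o big.iso.torrent big.iso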
Resilio, I believe, supports it, but I would prefer an open-source alternative.
They have MuWire now:
Not private but public sharing.
Does it actually block trackers from picking it up, or does it just tell them not to, relying on them to obey?
Note: I was using syncthing for a one way sync all the time, with send-only and receive-only settings on the endpoints.
This sounds like a pretty severe bug, so if you still have the systems you were able to produce the behavior on it might be helpful to reproduce it and submit an issue.
I don't know enough; maybe it has security issues, or some limits... but the fact is that it's still widely used, by tech-literate people, so it stands the test of time and must be doing something right.
"Version 9.0 [...], released on June 20, 2006"
"Opera introduced [...] a built-in BitTorrent client" 
That's history now, of course, courtesy of today's browser monoculture.
I know it used to have one, but now that I'm aware of the v57+ extension issue, I notice that the one linked first in a search seems to be dead.
Back when Resilio Sync was known as Bittorrent Sync, I proposed it as a file-sharing solution between multiple locations for the company I was working for. It was immediately met with an emphatic refusal to use any kind of software that had "Bittorrent" in the name, regardless of what it was for.
This is the same company where they picked up that I had "Bittorrent Software" installed on my work laptop, and the people who ran the auditing software freaked out because, you know, it's Bittorrent! I told them it was a single icon in my PortableApps folder but they just kept saying "There's a Bittorrent client on your PC, you could be fired!" until I deleted the icon.
So yes, the fact it's associated with piracy has made a huge difference in some places.
> ... a browser-based WebTorrent client or "web peer" can only connect to other clients that support WebTorrent/WebRTC.
This essentially splinters the world into the bittorrent and webtorrent halves, which makes it far less useful.
The DHT remains split for now.
Steam could probably benefit loads from torrents but they do have extremely good distribution servers already.
- updates of software.
- distribution of public video files.
- distribution of install images.
- distribution of raw public measurements (e.g. science data).
Basically BitTorrent is the protocol for any use case with 1:n (or n:n) distribution of public data (i.e. no data access control is needed).
Copyright has nothing to do with it and the nature of BitTorrent does make it also a good fit for some pirates.
Never forget that the following is shared over BitTorrent:
- World of Warcraft updates.
- Linux distribution install images.
- haveibeenpwned SHA lists.
And that’s just what I personally know. I know there is more.
Seriously. Zmodem/resume was doing better in 1986.
The HTTP server needs to provide optional support for range requests on that resource. (RFC 7233) Not all do.
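A quick way to check for and use range support with curl (the URL here is a placeholder):

# does the server advertise range support?
curl -sI https://example.com/big.iso | grep -i accept-ranges
# resume a partial download where it left off (-C - picks the offset automatically)
curl -C - -O https://example.com/big.iso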
- Fully p2p, even the signalling server part which is done via Tor hidden services (later on there will be an alternative using a DHT).
- So unless the Internet or Tor goes offline it can never be down.
- Works with proxies and in corporate environments when UDP might be blocked.
magnet:?xt=urn:btih:[this is where the hash goes]
I have no trouble seeding in swarms, only when I create torrents. Other than that, BT is ideal.
This is such a facepalm moment. "This service is awesome, it does everything you need, it's reliable, it's safe, it's practical, and all you have to do is encrypt your files, share encryption keys, validate identities, and establish trust with the person you're sharing with. Also if you screw it up in any way, your file is public forever. It's so rad!"
You can literally say that for ANYTHING out there. Not sure what point you are trying to make. Let me guess, "Encryption is hard?". Well there's no free lunch.
Encryption is hard, which is why it's sad to see that when someone gets it right and nails the UX, it still gets ruined by the real world somehow.
5GB on Internet: Upload to my OVH VPS and HTTP or Rsync it to its destination
10TB, local or Internet: Physical Hard Drive.
Never underestimate the bandwidth of a station wagon full of backup tapes!
And since you can now buy 1TB micro SD cards, perhaps I'd split the file 11 ways (no way it'll fit exactly) and send them via carrier pigeon. Or heck, I could just tape them to a drone and hope they aren't damaged in the crash. There are lots of ways to move data around. Maybe you want to UUENCODE it and submit bad URLs to a server's log so that it can be exfiltrated later? It would probably take a very, very long time, but could be done. I call it "CURLyTP"
In Poland there is an option to send a package via train for $8. Just go to the station and give the package to the conductor/train guard; a few hours later the receiver can get it at the destination station. Sending HDDs full of raw video footage this way is very popular among video editors here.
Over $20 for a 1 lb package, over $60 for a 50 lb package from Oakland to Los Angeles and a quoted time of four days station to station.
I can overnight a 50 lb package for $35, door to door, over that distance with the appropriate FedEx account and discounts. The ground service would cost me $30 without any discounts...
As many as can safely fit into the conductor's compartment. They can refuse to take a package if it would delay the train - so better have the exact amount in cash, because card readers on trains are extremely slow.
How does the receiver know when their package is coming? (other than the train timetable)
Train timetable only. For most trains delay information is published on the operator's website. Delays are rare, except during the holiday season.
If the receiver misses their package it goes to the closest warehouse; then the receiver is called and can request delivery by another train, but needs to pay another $4 on pick-up (plus a fee for every day the package spends in the warehouse).
I'm skeptical of this, since when I order microSD cards online they arrive via USPS and have a perfectly valid partition table and FAT filesystem on them.
Could you imagine having your laptop and camera wiped every time you jumped on a plane?
The exact quote by Andrew Tanenbaum is: "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway."
Obligatory xkcd.
I met Andy Tanenbaum in 2008 at Vrije Universiteit in Amsterdam. I was the Technology Evangelist for AWS in Europe, giving a seminar there on cloud computing.
I loved him. Funny, smart. I didn't even know he was American until I met him in person (having studied computer science, you're not exactly sure of someone's nationality solely based on his last name).
Nothing wrong with that. Some of my best friends are American.
A 2020 update for that would probably use 256 or 512GB cards, resulting in around 4.4 or 8.8EB (and probably twice that again if you assume the cargo capacity of the 2020 SUV equivalent of a 1985 Volvo...)
The data transfer rate, however, scales linearly with cardinality...
If it is above 50GB and on my personal computer, I encrypt the data and then physically mail a USB stick with the data on it. Trying to arrange the downloading/uploading of 50GB of data from my personal computer to another personal computer is a real pain. The people I would send/receive that much data to/from are usually people who don't know much about ftp, scp, or easily sharing files over the Internet. Sending a USB stick is just so much easier and, in many cases, faster. I make sure the recipients know how to decrypt the data before sending it.
If it is on a server (for example, a side-project instance in AWS), then I drop it into an S3 bucket, export it, and send the URL to the recipient. I just eat the cost of them downloading it. Usually I am making money off the work anyway, so it is the cost of doing business.
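If it helps, the "export it and send the URL" step can be a pre-signed link so the bucket stays private; a sketch with made-up bucket and file names:

aws s3 cp big-export.tar.gz s3://my-sideproject-bucket/big-export.tar.gz
# link valid for 7 days (604800 s); anyone holding it can download, nobody else can
aws s3 presign s3://my-sideproject-bucket/big-export.tar.gz --expires-in 604800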
Consider BunnyCDN; it's way cheaper than S3.
BunnyCDN does look good though.
I do the same thing with OneDrive. Of course that's the same thing and I'm just agreeing with your answer, there are any number of DropBox-like services (including DropBox!). OneDrive is nice if you have Office 365 because you can password protect a link and send the password via a separate channel, which makes the process pretty secure (if you trust Microsoft with your data).
Between Macs, AirDrop works very well. I have sent >10GB files between Macs, quite quickly as well.
I have never sent a 10TB file so I wouldn't know. None of my drives are that large yet :)
Partially out of my annoyance with this I implemented a port of Magic Wormhole in Go that fully interoperates with the official python version: https://github.com/psanford/wormhole-william
I chose not to copy the features of magic wormhole because it seemed to me (at the time) to be very complicated and lacking features crucial to me (namely restarting transfers, and sending folders without zipping them).
This has been to my benefit, I think, because I can iterate on croc quickly without having to conform to the wormhole spec, and I've since added features still not available in wormhole (like restarting transfers, and IPv6 support).
That being said, I really like magic wormhole and consider it my inspiration. And I'm glad that you ported it to Go!
As I mentioned in another thread installing Magic Wormhole is _easy_, just "brew install magic-wormhole" or "sudo apt-get install magic-wormhole" (or distro equivalent). Windows might be tricky but don't see how croc does it much better. I hadn't heard of Scoop but seems like you'd need to install _that_ first (which has its own dependencies). That said I haven't used Windows in many many years, maybe Scoop is the standard nowadays like Homebrew is for Mac? In which case I'd give a point to croc.
Also, with a Go app you could just share/distribute the binary, but then you also need to know where to put it. And you could do that with Magic Wormhole as well.
Or a step better, like using something like PyInstaller (I wonder why the author hasn't done that?).
I agree installation of magic wormhole is easy on Mac/Linux, but I think it's not easy on Windows. Windows is very common (77% of the desktop market share), so I wanted to keep croc easy to install for Windows.
In fact, if you are just receiving a file on Windows, all you have to do is download the binary from releases, unzip it, and double-click on the binary - no terminal experience required. (Sending a file does require using a terminal, but all CLI apps have that hurdle...)
In any case, I encourage everyone to set up their own relay. It's easy. Make sure ports 9009-9014 are open and then run `croc relay`.
Connecting to your relay is as easy as `croc --relay ADDRESS:9009 send X`. And if you add the `--remember` flag you only have to tell it which relay to use once.
Just curious: pun intended?
I use MWH often for sending stuff like URLs, passwords, API keys... having to create a file for that is quite annoying (as is remembering to delete it afterwards).
"brew install magic-wormhole" or "sudo apt-get install magic-wormhole" is easy enough and I don't hang around with people who use Windows. ;)
Resumable file transfers sound like a great feature though. Not sure it's implementable with MWH the way it now works.
Tab completion is something I'll miss from MWH (I'm already converted!) but I can live with it, and my non-technical friends aren't even aware of the concept.
Edit: magic wormhole is awesome! Thanks! :D
Provided of course that you have a wifi card that supports active monitor mode.. and you are root on your machine.
It _imitates_ the AirDrop UI, but is not compatible.
Use case is you open up snapdrop.net on both devices (nearby? same wifi?) and you can send files between, but I believe it all goes through a central server. Great for sending a few photos from phone to laptop or linux device when AirDrop isn't an option.
$ nc -l 4242 > dest
$ nc hostname 4242 < file_to_send
Fastest way to transfer a collection of files on a local network, and it doesn't require temporary storage for the archive.
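For a whole directory, something along these lines works (nc flag syntax varies between BSD and traditional netcat, so treat it as a sketch):

# receiver: unpack the stream as it arrives, no intermediate tarball on disk
nc -l 4242 | tar xf -
# sender: stream the directory straight into the socket
tar cf - somedir | nc hostname 4242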
(...And my axe!)
I usually use ssh (e.g. ssh -L 3333:localhost:3333) and avoid gpg, though I may use lzop depending on link and CPU speed.
tar cf - files morefiles | ssh user@dest "tar xf -"
tar cf - files morefiles | zstd | ssh user@dest "zstd -d | tar xf -"
Not that extra checksums are a bad idea...
However, no one has mentioned a super simple service: https://wetransfer.com/ - simple as drag & drop, enter the recipient's address, SEND. Pretty handy if you want a non-techie to send you something.
A dead simple UI and a link is nicely emailed to the receiver.
It's like a private dropbox.
For files on the order of <= 10GB, magic wormhole is lovely: https://techcrunch.com/2017/06/27/magic-wormhole-is-a-clever...
`sudo pip3 install magic-wormhole` is an automatic "must run this command" on all my servers. Simple, secure way of sending files from A to B, with no bullshit. No account creation, even.
Here's a couple of my projects:
curl https://patchbay.pub/random-channel/filename.bin --data-binary @filename.bin
curl -O https://patchbay.pub/random-channel/filename.bin
This one works in the browser. You select a file and it gives you a link you can share with others. The underlying tech is more complicated (relies on WebSockets) than patchbay, but the server is currently more reliable. I'm in the process of improving the patchbay server to be more robust, and I think it will eventually replace fibridge completely.
My current project is building something along the lines of Google Drive, but much simpler and designed to be self-hosted from within your home.
I gave that question a few tries but I feel part of the problem is the market being saturated by giants doing half the job. Another part of it is the lack of interoperability regarding file sharing between operating systems.
I mean - a native right click > send to > user, with a native notification to that user offering a direct p2p download from that person. No software needed, seamless integration. Why is that so hard?
I really wish the big boys would give that a try rather than giving us toy-like features and price bump.
You see, we have protocols like HTTP and the web relies on them; everyone agrees this is how the web works and all OSes have one or several browsers able to access it.
Something similar works with emails.
Yet we don't have anything to share files between computers. Imagine something like bittorrent directly integrated into all OSes, letting you right click > send to.
For my 12TiB of data I use Syncthing when I need to sync it more often; otherwise, rsync.
I've used rsync several times for billions of smaller files totalling 300GiB, but it really all depends on how I connect the nodes. I prefer Syncthing, but when only ssh is available, rsync is good too.
Currently largest synced directory by syncthing (that shares usage stats) is over 61384 GiB :)) https://data.syncthing.net/
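For the rsync-over-ssh case, this is roughly the invocation I mean (the usual flags; paths are placeholders):

# -a archive, -H preserve hardlinks, --partial keeps half-finished files so transfers can resume
rsync -aH --partial --info=progress2 /data/ user@dest:/data/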
If it's for more than 1 person I upload it to a VPS if it's small (<20 GB) or make a torrent if it's not.
$ python -m http.server 8000
and then you can start ngrok to expose the file
$ ngrok http 8000
that will give you a URL to share with whoever wants it.
ngrok is an amazing utility though!
For local network:
I use miniserve ( https://github.com/svenstaro/miniserve ) which is just a simple http server.
There are similar tools for when I want to share it from the smartphone.
Through the internet it really varies:
Sometimes it is Firefox send ( https://send.firefox.com/ )
For photos, I use a self hosted photo solution piwigo ( https://github.com/Piwigo/Piwigo )
In earlier days it has been a self hosted nextcloud ( https://github.com/nextcloud/server ) instance. I still use it when the files are getting too large for Firefox send.
I also tried the already-mentioned wormhole, but that only works with tech people.
"Firefox Send is temporarily unavailable while we work on product improvements.
We appreciate your patience while we make the Firefox Send experience better."
Or npm install -g http-server for nodejs,
Local network is easy.
npx http-server .
To remove the global install step. This will install and run all in one command.
gpg --symmetric file.dat
(enter a password)
gpg --decrypt file.dat.gpg > file.dat
(enter the password)
Others have said it better than I can. See e.g. https://latacora.micro.blog/2019/07/16/the-pgp-problem.html
With that said, Magic Wormhole is also a very good tool for transferring files. It will encrypt in transit. So for many files, using a separate encryption tool is not necessary. (So far I haven’t tried it for large files.)
With trackers, you're revealing the fact-of-transmission, transmission-size, & endpoints to any number of unknown remote parties. Potentially, attackers not even on the privileged network-path from origin to destination could tee off a copy of your encrypted data for offline analysis.
With Magic Wormhole's rendezvous-server, only one server, run by the same person whose code you're trusting (& can audit), briefly relays encrypted control-messages. (It might even be limited in its ability to deduce the size of the transfer – I'm not sure.) And if that's still too much, you can run your own rendezvous server.
It seems to me the amount of information leaked in the BT Tracker approach is strictly (& perhaps massively) more, to more entities, than that leaked in using the Wormhole author's server.
Of course the nice thing about Magic Wormhole is that its security does not depend on server components to be trustworthy. Use the default servers, use your own, use a different third party server, it doesn't matter, your data is still secure.
Edit: If you are worried about privacy magic-wormhole supports transit over tor.
The author really wants to dislike PGP, but the reason everyone trusts PGP is because it's been around forever. Yeah there've been deficiencies, just like there've been deficiencies in OpenSSL, but that doesn't make it a bad tool. I could go on but this xkcd sums it up: https://xkcd.com/2347/
>Absurd Complexity / Swiss Army Knife Design
Git is complex, yet effectively every project ever uses it. The reason is you're fine to avoid the edge-cases and just focus on the main functionality, but that one time you need to do something ridiculously hacky, there's a tool to do it, instead of having to roll your own solution.
Would you rather your software not have backwards compat? GPG has sane defaults, and everyone you talk to using modern versions will be secure by default. Not sure what the author is going on about with weak default password encryption:
$ gpg -vv --symmetric test.txt
gpg: using cipher AES256
gpg: writing to 'test.txt.gpg'
I don't really find GPG any harder to use than Git. http://irtfweb.ifa.hawaii.edu/~lockhart/gpg/
By default, GPG keys expire in a year.
I've never heard of any of this. Sign and encrypt, by default you get AES256 encryption and a SHA512 digest:
$ gpg --sign --encrypt test.txt
$ gpg -vv -o /dev/null --decrypt test.txt.gpg
gpg: encrypted with 3072-bit RSA key, ID 74588E74DDD483BC, created 2020-09-02
gpg: AES256 encrypted data
gpg: binary signature, digest algorithm SHA512, key algorithm rsa3072
Have an identity. Have other people verify it. Trust based on that. It's the same way the PKI works, you know, that thing that runs the entire internet. Except you don't need to trust CAs anymore.
He's not wrong about this one; normally you can see whose ID a message is encrypted for. If you're trying to be sneaky, just use symmetric encryption, I guess; it feels like a different use case.
>No Forward Secrecy
Definitely a different use case. There's no case where I want to decrypt a packet from the middle of a TLS conversation a few years later. But an encrypted attachment in an old email?
How are GPG keys harder to handle than SSH keys? Both are just blocks of base64 (gpg --export-secret-keys -a)... one is 80 lines while the other is 50, but does it really matter?
Same argument as Backwards Compatibility, I think.
The page he linked has 27 CVEs. Over the last 15 years. For comparison, OpenSSL has over 200.
This speaks to the article's complaint that GPG is usually the wrong tool for the job. For example if you just need to transfer a file securely (and have a fast, reliable internet connection on both ends and don't need to worry about active tracking of metadata), you can use Magic Wormhole (or a similar PAKE system) to do it. Imagine two scenarios: one with GPG and one with PAKE, and in both cases an adversary captures a ciphertext. With GPG if they can get your private key 6 months later, you're screwed. With PAKE the keys used to exchange the data are ephemeral, and so this isn't even a possibility.
> > Broken Authentication
> I've never heard of any of this.
I believe this is referring to authenticated encryption (AEAD), which is definitely valuable and GPG does not provide. AGE does.
Most of the other stuff you mention also falls under "wrong tool for the job". If you want a better argument, I'd talk about GPG having a web of trust system built in. On the other hand I think it's an open question whether this has ever brought real value to anyone. We have enough other secure messaging systems that it's no longer necessary for a single program to get there on its own. Usually you can count on some other mechanism to confirm your contact's identity.
AFAIK this isn't really true. If the adversary captures the initial key exchange plus all the data (ie the full transaction), then later discovers your PSK, they'll be able to decrypt. The only case where this helps you is if they capture some packets out of the middle without the initial handshake.
It doesn't matter if your ciphertext is authenticated if it's both signed and encrypted, which is what I was getting at. In a normal TLS-like encrypted conversation, yes AEAD is very useful. But it's not applicable here.
Someone can correct me if I'm wrong, but I believe the idea behind a PAKE is that the password only authenticates the key exchange and doesn't contribute to it. So if you record all transmitted data you still need to break the key exchange which should have used a bunch of random bytes from both parties that are thrown away after use. The password is only there to prevent MITM, not to derive keys.
I believe magic wormhole uses SPAKE2, which has perfect forward secrecy. When using passwords to secure transmitted files it's really important to have forward secrecy otherwise you risk the transmission being recorded and the password being attacked offline which depending on your password strength might lead to trivially decrypting the data.
That's right. From memory, the passwords are just used to do a DH key exchange. The key is entirely ephemeral. Even if the entire ciphertext is captured, and even if the adversary then gets your password, they can't decrypt. To decrypt you'd have to MITM the key exchange, which would require knowing the password before the file is exchanged.
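A rough illustration using the python `spake2` package (the one magic-wormhole builds on), assuming I'm remembering its API correctly: both sides derive the same session key from the short password plus fresh randomness, and nothing reusable ever crosses the wire.

from spake2 import SPAKE2_A, SPAKE2_B

# both sides know the short shared password
alice = SPAKE2_A(b"7-crossover-clockwork")
bob = SPAKE2_B(b"7-crossover-clockwork")

# each side sends exactly one ephemeral message to the other
msg_a = alice.start()
msg_b = bob.start()

# both derive the same strong session key; an eavesdropper who records
# msg_a/msg_b and only later learns the password still can't recover it
key_a = alice.finish(msg_b)
key_b = bob.finish(msg_a)
assert key_a == key_b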
Just for posterity, the tool linked above that everyone is discussing here was written by the Go Lead Dev for cryptography and security at Google.
One obvious guess is contradicting Latacora. 'tptacek is well known here; his name alone gives significant weight to anything he writes.
In any case, they have valid points. PGP was written at a time we didn't understand cryptography as well as we do now. We can do better. Have done better, if half of what I've heard about Age is true.
Absurd complexity: We can definitely do simpler than PGP, at no loss of functionality.
Swiss Army Knife design: I think I disagree with Latacora there. Doing many things doesn't mean you have to do them poorly. There's no material difference between having 3 programs and having one program with 3 options, at least on the command line. If PGP does anything poorly, it's for other reasons.
Mired In Backwards Compatibility: well, that depends. It makes sense that PGP can decrypt old, obsolete ciphers & formats. The ability to encrypt to that same old stuff wouldn't. For instance, PGP should no longer be able to generate RSA keys at all. Then, one year later, once all RSA keys have expired, new PGP versions should no longer be able to encrypt to RSA keys at all. (In an ideal world. More realistically, we should wait a couple more years.) Only the ability to decrypt old messages should be kept until the end of times.
Obnoxious UX: I don't know enough to have an opinion.
Long term secrets: Sure they're bad, but I don't think we can avoid them. People need your public key to send you anything, so it can't be too short lived. My guess here is that Latacora is attacking the whole file encryption + web of trust thing, not PGP in particular.
Broken Authentication: If attackers can trick PGP decoders into decrypting forged messages, that's fairly critical, and should be fixed even if it breaks backwards compatibility (we could have an optional `-legacy` flag or something to compensate). Now if you go sign and encrypt… well there are two possibilities: if you sign then encrypt, you run into the cryptographic doom principle: the decoder will decrypt then verify, which creates the temptation to process unauthenticated data. Many vulnerabilities have been caused by such errors. If you encrypt then sign, you reveal to the entire world that you signed this particular ciphertext. Not the kind of data most people would like to leak. In my opinion what we really want is authenticated key exchange followed by AEAD. With the Noise X pattern, you'd even hide your identity from snoopers.
Incoherent Identity: Okay, they're clearly attacking the very notion of web of trust, not PGP specifically. They say it doesn't work, but I'd like to know why. First, I'm not sure I want to take their word for it, and second, the causes might be fixable.
Leaks Metadata: that one is clearly avoidable. Noise X for instance uses an ephemeral key to encrypt the transmitted public keys, and the recipient's key is implicit. Can't know who the message is for (nor from) without the recipient's private key.
No Forward Secrecy: Different use case indeed. Again, Latacora is attacking the very notion of file encryption, not PGP specifically.
Clumsy Keys: I'm with Latacora on this one. The 50-line SSH keys are clearly RSA-based, and as such obsolete. Modern keys use elliptic curves, and those take one line, which is more easily copied & pasted in various contexts. Arguably a detail, though.
Negotiation: It's more than just backwards compatibility. Backwards compatibility can be achieved with a simple version number. If instead we have a range of algorithms to choose from, things get more complicated. Now, you can't avoid the need for different kinds of encryption: you can encrypt against a public key, or you can encrypt with a password. Possibly both. Beyond that, however, it's simpler to have a version number that points at a single public-key encryption and a single password-based encryption.
Janky Code: Can't judge for myself. I can guess, however, that much of it is caused by the (ever-evolving) PGP specifications themselves. Probably more a consequence of all the other issues than a separate problem. Still, I think we can do much better. I mean, I've written an entire crypto library, and people give me hell for a single vulnerability in over 3 years. 27 CVEs in comparison would be worth burning in Crypto Hell for a long time.
There are many tools which are way older than gpg, but still in the wide use. For example, I have not heard any arguments that people should stop using things like "rsync" or "curl".
But if you like GPG then feel free to continue using it.
Using Magic Wormhole would seem to be far more secure than gpg and bittorrent, since it has less risk of third-party interception and uses SPAKE2, which has perfect forward secrecy.
If I had to use bittorrent to transfer a sensitive file, I would generate a random key and encrypt with it, then share the key over a different channel.
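Something along these lines (the loopback pinentry bit is what lets GnuPG 2.x take a passphrase non-interactively; file names are made up):

# sender: random 256-bit key, symmetric encryption
key=$(openssl rand -hex 32)
gpg --batch --pinentry-mode loopback --passphrase "$key" --symmetric sensitive.tar
# seed sensitive.tar.gpg; send $key over a different channel (Signal, phone call, ...)

# receiver, with the key received out of band
gpg --batch --pinentry-mode loopback --passphrase "$key" --decrypt sensitive.tar.gpg > sensitive.tar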
I've never needed to send TBs of data.
I had to keep Netscape open because it showed the status of the file download. People asked why my web browser was open all day; it was for the giant download that was part of my job.
Windows now comes with real sshd - Mac has it, and linux/bsd of course has it.
For small files (less than 10GB?) generally scp - but I'm trying to get into the habit of using rsync. Generally scp isn't a very good idea.
For larger filesystems: zfs send over ssh (rough sketch below).
For streaming: dlna/upnp over zerotier vpn.
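The zfs-send-over-ssh case looks roughly like this (dataset names made up; for really big transfers, incremental sends with -i and resumable receives with -s are worth a look):

zfs snapshot tank/data@xfer
zfs send tank/data@xfer | ssh user@dest "zfs receive backup/data"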
Shame Firefox send service imploded - for smaller files and where ssh isn't an option - it was a nice service. But a little too much hassle to self-host.
Didn't realize there was another way.
We use it for copying hundreds of TiBs regularly, occasionally over a PiB.
For the thing you did, these could have worked.
For ssh, there's https://www.psc.edu/index.php/hpn-ssh for improving wide-area transfer speeds, and something else, I think.
Otherwise I liked Firefox Send while it was running since I mostly trust them to not be full of intrusive nonsense.
When I want to make something large available to someone else, I post it on my local webserver. It points to a NAS on my LAN; anyone in the house can just drop whatever on it and hand out a link.
I've never used it myself, but I've heard it's actually quite impressive; the business model makes it inaccessible to individuals, though.
Apparently they do some sort of complicated trace routing and open a ton of simultaneous routes. They probably also use something other than TCP for error correction.
Seems like a good place for some disruption
- iCloud drive for personal stuff I need to share between devices. I trust this to sync anything up to a TB reliably.
- Google Drive when I need to share to someone else.
- Occasionally external drives when I need to move data fast locally.
- Some combination of S3 / AWS CLI / EC2 when things go beyond personal computer capacity depending on where the data is coming from and going to.
For bigger things, there are two basic paths:
If it's to someone that's not primary a tech person, then the data goes on a portable hard drive, and I either drive it to them, or mail it and walk them through accessing the data. In both cases I encrypt the data, generally with Veracrypt.
If it's to someone that won't mind using AWS's toolchain, and has decent connectivity, I'll use Snowcone or Snowball to get the data to S3 and give them S3 URLs.
I tend to get the data to S3 sooner or later so I can migrate it to Glacier. More than once I've had to recover GBs or TBs of data because the customer lost it, so I'm prepared now.
Hightail offers four different subscription levels for file sharing, with varying restrictions on file upload limits and storage. With our free Lite plan, you can upload files up to 100MB each, and have a storage limit of 2GB. Pro plans, which start at $15/month, allow you to upload individual files up to 25GB each. Teams plans, which start at $30/user/month, allow you to upload individual files up to 50GB each. Business plans, which start at $45/user/month, allow you to upload individual files up to 500GB each. Pro, Teams and Business plans come with unlimited storage.
pipx install magic-wormhole
wormhole send /some/big/stupid/thing.tar.gz
For files I need more control over that are less than say 5gb, I tend to scp them to a web server I control, so that I can delete them afterwards.
For files larger than that, I'll use a private Bittorrent file. It's very rare I need to transfer files this large, but I really like this solution.
That's good for the ~5GB file transfer.
What about using "Firefox Send"? (I never used it so far)
I read that the limit is 1-2.5GB => maybe you could break down the file and upload it in multiple pieces...
EDIT: oops, Firefox Send doesn't seem to be available anymore - https://support.mozilla.org/en-US/kb/what-happened-firefox-s...
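If you do go the break-it-into-pieces route with any size-capped service, plain split works fine (the 1GB chunk size is just an example):

split -b 1G huge-file.bin huge-file.part_
sha256sum huge-file.bin > huge-file.bin.sha256   # so the other side can verify
# receiver, after downloading all the parts:
cat huge-file.part_* > huge-file.bin
sha256sum -c huge-file.bin.sha256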
You should go to another cloud computing provider if you don't like Google Drive.
My third alternative is even easier, but it's a little sluggish because it caps you at two gigs either way. It is a service called WeTransfer, which was almost your only choice 15 years ago if you had to send a huge file.
But for something like 10 TB or more, I'd see a torrent as the only way. My upload speeds are too slow for anything else; the connection would get reset. The torrent also helps prevent corruption.
Snark aside, Croc does look a lot like magic wormhole. I'll definitely have a closer look at it.
When I'm the receiver, I'm using a self-written CLI tool "send me a file" (smaf): https://github.com/nikeee/send-me-a-file
It basically opens up an HTTP server on my machine and lets a user upload a file via curl or a web interface. It also computes a SHA-256 hash while receiving.
These methods only work on a local network. I use my VPS and sftp/http for transfers beyond the local network.
Then I copy-paste that into a self-Slack and `wget` it on the other side.
It's almost equivalent ergonomically to a `python3 -m http.server` and because it's the same whether I'm giving to someone else or to myself, I spend less thinking!
The only person I'm sharing with lives near me, so sneakernet is the most convenient.
I once hit a transfer speed of 30TB/hour carrying a box of hard drives home from work.
– Andrew Tanenbaum, 1981
Now I live in the heart of Silicon Valley, a couple km from Google's Headquarters, and have a crappy 15 mbps uplink with no other options available to me, so I typically throw the file on Amazon S3, Google Drive, or Dropbox before sending a link to the other person so that they don't have to put up with a 15 mbps download.
Too bad the laptop I am transferring from is in bad condition and running Windows, so sometimes the iCloud service freezes and I need to restart it.
I tried to create a torrent, but it seems to take forever on such a weak machine.