Ask HN: What is your favorite method of sending large files?
319 points by mettamage 21 days ago | 354 comments
I just opened up a simple HTTP server to send someone a large file. Then I figured, I never gave this question proper thought.

But some of you have, and I figured they make for fun and interesting stories ;-)

So what's your favorite method to send large files, of at least 5GB or bigger? Though, I'm also curious on how you'd send 10TB or more.




Bittorrent. No, really. Lots of nice behaviors when transferring large amounts of data between arbitrary endpoints.

Transferring the torrent metadata is pretty trivial and can be done via a wide range of methods, and having that flexibility can be nice.

Unlike HTTP, you get reasonable retry behavior on network hiccups. Also, more robust data integrity guarantees, though a manual hash test is probably a good idea either way.

At least among people I'm throwing TBs of data around with, torrent infra is common and it's nice to not have to deal with some special-purpose tool that, in practice, is probably a pain in the ass to get compatible versions deployed across a range of OSes. Basically every platform known to man can run a torrent client of some sort.

And obviously, no dependency on an intermediary. This is good if you're trying to avoid Google et al. That does, however, bring a potential con: if my side is fast and your side is slow, I'm seeding until you're done. If I'm uploading something to gdrive or whatever, I can disconnect once the upload is done. If you control an intermediary like a seedbox, that's less of a problem.

In general, though, torrents are pretty great for this sort of thing. Just encrypt your stuff beforehand.


Just be careful - unless you mark that torrent private it'll get posted to the DHT and crawlers like BtDig will pick it up and list it publicly.

For this reason I prefer using something like Syncthing which is designed more with this purpose in mind.


I am working on a tool that will help with this. It is built with AlpineJS, Nim, Aria2 and Webview, resulting in a 5MB download (which doesn't include the torrent file, as that varies). The idea is that it is a one-click solution, as the torrent is embedded with the binary.

The inspiration for this tool is assisting with LAN parties. The one thing I have against the private flag is that it also disables LAN peer discovery, which would be okay if you use a tracker behind the LAN (though the torrent would need to be modified if the hostname/IP changes). Since you can't configure other people's clients, I found it simpler to use Aria2 preconfigured to disable PEX and DHT.

Combine this with Metalink or Web Seeds, so that you can have an initial seeder based on HTTP. I think IPFS would make a great web seed as long as their gateway and Cloudflare's continue to stay up. IPFS creates a permanent URL, if you will, so no need to worry about dynamic IPs or domains. It would be great if IPFS had a smaller binary (right now it is around 20MB compressed) and had a way to get a file and "seed" it like a torrent. But for now I think torrent is mature enough.


Why not use Syncthing or Resilio Sync for LAN parties?


I have considered it. The one thing about Syncthing is that there is no true readonly way to sync. For example, someone may accidentally extract an archive in the Syncthing folder and it will sync everywhere. If Syncthing gets this feature I think I would be 100% onboard with it for LAN parties.

Resilio I believe supports it but I would prefer an opensource alternative.

https://github.com/syncthing/syncthing/issues/62


You can use the send only feature in Syncthing to make your folder read only.

https://docs.syncthing.net/users/foldertypes.html#folder-sen...


Sure, I can have for example my computer be send only. But unless everyone else configures it properly as receive only, they can start pushing data to other clients. It would be far easier to have Syncthing create a readonly share similar to Librevault and Resilio.


You could also gpg encrypt the file first, which is a little more work but bulletproof. Maybe someone should make a tool that creates a torrent and encrypts it for you and presents a password box on the other end.
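A rough sketch of what the manual version of that might look like (the tracker URL is just a placeholder; transmission-create ships with the Transmission client):

    gpg --symmetric --cipher-algo AES256 backup.tar
    # produces backup.tar.gpg after prompting for a passphrase
    transmission-create -p -t udp://tracker.example.com:1337/announce -o backup.torrent backup.tar.gpg
    # -p marks the torrent private, -o names the output .torrent file

Then send the .torrent (or a magnet link) over one channel and the passphrase over another.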


Like magic wormhole?



Sort of like Limewire sharing your documents directory?

They have MuWire now: https://muwire.com/

Not private but public sharing.


That's why you encrypt it or at least make it a password protected archive. That way no one will see the file names in the content list. And without seeders, it's highly unlikely anyone will download it. Torrents work well with 1 seeder, 1 downloader.


Technically speaking, what occurs when you mark a torrent private?

Does it actually block trackers from picking it up or just like tell them not to if they want to obey it?


I'm not an expert on this, but as far as I'm aware, the main difference is that your client won't go out and announce the infohash to the DHT (used for trackerless torrents, magnet links, that kind of thing) and won't provide it to other clients for download. It'll only ever send requests relating to it to the trackers specifically listed within the torrent itself. This means that anyone who wants to download it basically needs to possess the torrent file. Trackers don't really crawl in this way; however, some search sites like the one I mentioned do. They'll go out and look at the entire DHT, trying to find entries that they don't know yet, and when they find one, they go download the torrent and post it to their index.


For some reason I haven’t understood, syncthing would often miss transferring files but mark the entire transfer/sync as complete. No errors in the logs either. I tried it for a few months earlier this year, then gave up and switched to plain rsync in a shell. Based on that sample of one personal experience, I wouldn’t recommend syncthing to anyone.

Note: I was using syncthing for a one way sync all the time, with send-only and receive-only settings on the endpoints.


I've been using Syncthing extensively for years and never observed this behavior - it always either completes the transfer or fails to complete the transfer (which is obvious in the UI). A silent failure is certainly not typical.

This sounds like a pretty severe bug, so if you still have the systems you were able to produce the behavior on it might be helpful to reproduce it and submit an issue.


Yeah, I gave up on Syncthing when using it between my Android device and Linux server. Too many hours were lost trying to debug why the server no longer showed as connected on the phone. At some point I went back to emailing myself files, because that actually worked, and I realized that I didn't actually need an entire program with a synchronization algorithm to send a few large files on occasion.


How are BtDig crawling the DHT? I thought this was made impossible or at least more difficult a while ago.


I don't see how it was made impossible. It's definitely possible; you can do it yourself, and there are many examples on GitHub (even BtDig has an old version on it).


It's trivial to disable PEX and DHT in any reasonable client.


Sometimes I wonder if the bad connotation due to piracy kept this protocol from becoming standard (in the sense that it's not widely adopted, and people need dedicated clients instead of it working, for example, in browsers).

I don't know enough, maybe it has security issues, or some limits... but the fact is that it's still widely used, by tech literate people, so it still stands the test of time and it must be doing something right.


> people need dedicated clients, instead of it working for example in browsers

"Version 9.0 [...], released on June 20, 2006" "Opera introduced [...] a built-in BitTorrent client" [1]

That's history now, of course, courtesy of today's browser monoculture.

[1] https://en.wikipedia.org/wiki/History_of_the_Opera_web_brows...


Does Firefox have an extension to download torrents?

I know it used to have one, but now that I'm aware of the v57+ extension issue, I notice that the one linked first in a search seems to be dead.


Brave has a built-in torrent downloader.


Probably, but it is quite a neat technology. As people in this industry, we know what else it is being used for, and it serves those purposes pretty well. I love torrents. I love the technologies that it uses. Many other projects use it; for example, take a look at https://docs.ipfs.io/concepts/dht and https://github.com/savoirfairelinux/opendht just for DHT alone. There is also https://en.wikipedia.org/wiki/Magnet_URI_scheme. There are many more, I'm just too lazy to find them at the moment, but believe me, the things you will encounter once you go down this road are truly amazing. Worth it!


> the bad connotation due to piracy made this protocol not standard

Back when Resilio Sync was known as Bittorrent Sync, I proposed it as a file-sharing solution between multiple locations for the company I was working for. I was immediately met with an emphatic refusal to use any kind of software that had "Bittorrent" in the name, regardless of what it was for.

This is the same company where they picked up that I had "Bittorrent Software" installed on my work laptop, and the people who ran the auditing software freaked out because, you know, it's Bittorrent! I told them it was a single icon in my PortableApps folder but they just kept saying "There's a Bittorrent client on your PC, you could be fired!" until I deleted the icon.

So yes, the fact it's associated with piracy has made a huge difference in some places.


I was kicked out of the public library once for running Syncthing, I guess because the firewall detected it as some sort of piracy tool


WebTorrent [1] can be used to run BitTorrent on the web. It's actually integrated into the Brave browser which is a nice feature. Other browsers might catch on soon.

[1] https://webtorrent.io/


Caution: Don't open this page on a metered connection, immediately starts downloading a 130MB movie file.


Ouch, I understand that it demoes their solution, but with the autoplay that's very insensitive.


WebTorrent is pretty limited though:

> ... a browser-based WebTorrent client or "web peer" can only connect to other clients that support WebTorrent/WebRTC.

(from https://webtorrent.io/faq)

This essentially splinters the world into the bittorrent and webtorrent halves, which makes it far less useful.


WebTorrent is part of libtorrent now, so should be part of all torrent clients based on it, e.g. qBittorrent:

https://github.com/arvidn/libtorrent/pull/4123


Not quite there yet. Latest release of libtorrent was 1.2.9, seems it's scheduled for "some branch" after 2.0 (https://github.com/arvidn/libtorrent/pull/4123#issuecomment-...). Add in some time for clients to actually start using it as well, which is not guaranteed either.

The DHT remains split for now.


If you want a CLI/Desktop WebTorrent client https://github.com/anacrolix/torrent works great.


For some reason browser vendors do not allow JavaScript to open arbitrary TCP connections, so there's no other way.


It was probably phones and datacaps that really did it. Torrents are really only ok on laptops/desktops and the unexpected data usage for customers is much worse than the costs of distributing data.

Steam could probably benefit loads from torrents but they do have extremely good distribution servers already.


In my previous job we used BitTorrent to distribute movies to cinemas. :-)


What an irony! :)


BitTorrent didn't catch on simply because there aren't very many actual use cases for it. The majority of P2P content distribution will always be illegal (due to the nature of copyright), and so the client-server model is much more representative of the real world. It is technologically pretty neat though, and some publishers do actually use the protocol for things like software updates.


Not very many use cases??

Except for:

- updates of software
- distribution of public video files
- distribution of install images
- distribution of raw public measurements (e.g. science data)

Basically, BitTorrent is the protocol for any use case with 1:n (or n:n) distribution of public data (i.e. no data access control is needed).

Copyright has nothing to do with it and the nature of BitTorrent does make it also a good fit for some pirates.

Never forget that the following is shared over BitTorrent:

- World of Warcraft updates
- Linux distribution install images
- haveibeenpwned SHA lists

And that’s just what I personally know. I know there is more.


> Unlike HTTP, you get reasonable retry behavior on network hiccups

Seriously. Zmodem/resume was doing better in 1986.


wget -c, it ain't hard.


You need to notice the connection died and do that manually, or think ahead and have your script do it in a loop. A bt client already has the reconnect logic. And it will hash the contents to make sure it worked correctly.


The problem isn't the clients.

The HTTP server needs to support range requests (RFC 7233) on that resource, which is optional. Not all do.


Every major webserver supports resume.


I've written a cross-platform GUI tool to transfer arbitrarily large files via p2p with end-to-end encryption.

https://cryonet.io

- Fully p2p, even the signalling server part which is done via Tor hidden services (later on there will be an alternative using a DHT).

- So unless the Internet or Tor goes offline it can never be down.

- Works with proxies and in corporate environments when UDP might be blocked.


I had to transfer 2tb of data over a wireless connection that kept dropping packets. BitTorrent was the only method I found that worked; everything else either broke down when it disconnected, or had too low of a retry limit to be useful, or wouldn't resume properly.


Can you recommend easy software to do the creation? Don't you need a tracker as well?


qBittorrent has a built-in tracker for private torrents: https://www.ghacks.net/2019/09/20/how-to-create-a-private-to... This method will also not leak information into the DHT, though 99% of the time you don't care about that.


Almost all BT clients can create torrents. You can use a public tracker if you want, or you can use a magnet link (DHT magic will find the peer eventually), or worst-case scenario you can manually add the peer IP on either end.


With transmission-create you can create a torrent from the commandline. Load it into your bittorrent client, find its hash in the UI, and make it into a magnet link, like:

magnet:?xt=urn:btih:[this is where the hash goes]

Example:

magnet:?xt=urn:btih:a008663c4fc7e710bd59200e5ef99ba04f83f319
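If you'd rather not dig through the UI for the hash, a sketch using the companion transmission-show tool (file names are placeholders; omit a tracker entirely if you're relying on DHT/magnet discovery):

    transmission-create -o bigfile.torrent bigfile.iso    # trackerless; peers found via DHT/PEX
    transmission-show bigfile.torrent | grep -i hash      # prints the info hash for the magnet link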


There's tons of available trackers. Just use whatever one you want. Also most clients can create torrents.


transmission is pretty complete and easy to use


+1


BitTorrent is the best for sending large files. I wish there existed a tool to make torrent creation easier, and I also wish my computer could seed. Every time I create a torrent, nobody can discover my server, even though the port is open and even if I add the server IP as a peer directly onto the client.

I have no trouble seeding in swarms, only when I create torrents. Other than that, BT is ideal.


Maybe you're only able to make outbound connections and not able to accept incoming connections? Try port scanning your PC from the internet
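For example (assuming your client listens on Transmission's default port 51413; substitute whatever port yours actually uses), running something like this from a machine outside your network shows whether the port is reachable:

    nmap -Pn -p 51413 your.public.ip.example
    # or, with a netcat build that supports -z:
    nc -vz your.public.ip.example 51413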


No, connecting externally is fine, that's what's so weird about it.


>Just encrypt your stuff beforehand.

This is such a facepalm moment. "This service is awesome, it does everything you need, it's reliable, it's safe, it's practical, and all you have to do is encrypt your files, share encryption keys, validate identities, and establish trust with the person you're sharing with. Also if you screw it up in any way, your file is public forever. It's so rad!"


No, "here's a magnet link, and here's the matching 50-char password; don't share the password with anyone you don't want have file". Just because you can make security more complicated doesn't mean you're obligated to.


> Also if you screw it up in any way, your file is public forever.

You can literally say that for ANYTHING out there. Not sure what point you are trying to make. Let me guess, "Encryption is hard?". Well there's no free lunch.


Well there was free lunch when Mozilla's Send thing was running. Sharing the password in the URL hash was brilliant. But then it ran into all the same Mega.nz problems where people immediately use it to host malware.

Encryption is hard, which is why it's sad to see that when someone gets it right and nails the UX, it still gets ruined by the real world somehow.


5GB on my local network: Windows File Share, Rsync, or HTTP depending on source/destination

5GB on Internet: Upload to my OVH VPS and HTTP or Rsync it to its destination

10TB, local or Internet: Physical Hard Drive.

Never underestimate the bandwidth of a station wagon full of backup tapes!

https://www.tidbitsfortechs.com/2013/09/never-underestimate-...

And since you can now buy 1TB microSD cards, perhaps I'd split the file 11 ways (no way it'll fit exactly) and send them via carrier pigeon. Or heck, I could just tape them to a drone and hope they aren't damaged in the crash. There's lots of ways to move data around. Maybe you want to UUENCODE it and submit bad URLs to a server's log so that it can be exfiltrated later? It would probably take a very, very long time, but could be done. I call it "CURLyTP"

https://miscdotgeek.com/curlytp-every-web-server-is-a-dead-d...


> 10TB, local or Internet: Physical Hard Drive.

> Never underestimate the bandwidth of a station wagon full of backup tapes!

In Poland there is an option to send a package via train for $8. Just go to the station and hand the package to the conductor/train guard; a few hours later the receiver can pick it up at the destination station. Sending HDDs full of raw video footage this way is very popular among video editors here.


In the US, you can ship using Greyhound bus lines. I wouldn't recommend it for this scenario, but I know people that ship auto parts around this way. Ship up to 100lbs for some incredibly cheap rates.

http://www.shipgreyhound.com/


This looks incredibly expensive and not competitive at all.

Over $20 for a 1 lb package, over $60 for a 50 lb package from Oakland to Los Angeles and a quoted time of four days station to station.

I can overnight a 50 lb package for $35, door to door, over that distance with the appropriate FedEx account and discounts. The ground service would cost me $30 without any discounts...


In my (limited) experience, the Greyhound shipping service becomes competitive when shipments are very bulky (like body parts) more so than when they are heavy.


You mean auto body parts, right?


Well, other body parts probably also fit the bulky description. But they'd also be heavy, I guess.


It may be competing with freight, instead of fedex. Auto parts don't always fit in a Priority (Small) box.


Many of the airlines in the US allow this as well - but it's more expensive - you can basically send something on a specific flight for around $100.

https://www.unitedcargo.com/shipping/productsAndShipping.jsp...


This reminded me of the mail coaches in express trains of India. Some train routes still have a mail qualifier in their name. Mail by rail.

https://en.m.wikipedia.org/wiki/Express_trains_in_India


That’s a really cool idea actually


Same thing happens in Greece with buses. It's a very popular way for moms to send home-made food to their single/university student sons.


I remember watching a Top Gear special in India which showed the delivery system for hot food for workers from their wives - the food was placed in metal tins, loaded into pallets, driven to trains, offloaded from trains, and driven to the workers. The reliability of the system was surprisingly high IIRC


Dabbawalla is the name you’re looking for, and that’s more local than some of the other things suggested in the thread


In Croatia as well :)


In Laos the (mini)buses were the most reliable way to ship anything. We had 3 motorbikes strapped to the top, and I sat cattycorner to a caged chicken on my trek through the north in a 15-seater minibus.


Balkan brothers!


Amtrak does it too


Oh that's fantastic - do you know how many packages they take? How does the receiver know when their package is coming? (other than the train timetable)


> do you know how many packages they take?

As many as can safely fit into the conductor's compartment. They can refuse to take a package if it would delay the train - so better have the exact amount of cash, because card readers on trains are extremely slow.

> How does the receiver know when their package is coming? (other than the train timetable)

Train timetable only. For most trains delay information is published on operator's website. Delays are rare, except holiday season.

If the receiver misses their package, it goes to the closest warehouse; then the receiver is called and can request delivery by another train, but needs to pay another $4 on pick-up (plus a fee for every day the package spends in the warehouse).


OVH is great if it’s not business critical! As the semi-official safe haven of piracy in Europe, OVH’s network has uhh...seen better days.


Since credit cards go through the mail, I assume you can send SDs in an envelope for a few cents, right? Fast and cheap. Although this anecdote says USPS radiation scanning might wipe it, anyone have experience? https://www.thewanderinglensman.com/2014/01/warning-dont-mai...


> USPS radiation scanning might wipe it

I'm skeptical of this, since when I order microSD cards online they arrive via USPS and have a perfectly valid partition table and FAT filesystem on them.


I've sent flash media through mail without problems. As well as hard drives with disks. Never had an issue.


I would assume that mail scanning is similar to the x-ray scanner at the airport, and any SD card or hard drive that can't handle that is useless.

Could you imagine having your laptop and camera wiped every time you jumped on a plane?


Film is mailed all of the time without issue. Film would be ruined by the luggage scanning (i.e. higher power) machines, though most film is usually OK with the lower power carry-on scanning machines.


TSA opened my checked luggage to manually inspect the box of 9-track tapes.


> Never underestimate the bandwidth of a station wagon full of backup tapes!

The exact quote [0] by Andrew Tanenbaum is: "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.".

Obligatory xkcd [1].

I met Andy Tanenbaum in 2008 at Vrije Universiteit in Amsterdam. I was the Technology Evangelist for AWS in Europe, giving a seminar there on cloud computing.

I loved him. Funny, smart. I didn't even know he was American until I met him in person (having studied computer science, you're not exactly sure of someone's nationality solely based on his last name).

[0]: https://en.wikiquote.org/wiki/Andrew_S._Tanenbaum

[1]: https://what-if.xkcd.com/31/


> I didn't even know he was American until I met him in person

Nothing wrong with that. Some of my best friends are American.


Some 2010-vintage calculations, when 16GB microSD cards were considered "big", estimate that 275PB would fit in a 1985 Volvo 240:

http://dansdata.com/gz105.htm

A 2020 update for that would probably use 256 or 512GB cards resulting in around 4.4 or 8.8EB (And probably twice that again if the cargo capacity of a typical 2020 SUV equivalent of a 1985 Volvo is assumed...)


Pedantically, the overall bandwidth in that scenario is exactly the same as the recording bandwidth of one backup tape. Since bandwidth doesn't scale.

The data transfer rate, however, scales linearly with cardinality...


You wouldn't want to keep all of your proverbial eggs in one basket. Split the file into a multi-volume archive and create a par2/par3 parity volume set. Distribute parts of the parity set and parts of the data archive amongst m pigeons such that you only need n pigeons to arrive on time without having gotten lost, shot or eviscerated by hawks, and you can reliably rebuild the original data set with fault tolerance that also handles read/write errors on the MicroSD cards.
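A sketch of that pigeon-ready prep with standard tools (file names and the 20% redundancy figure are arbitrary):

    split -b 900M big.tar.gpg big.part.      # produces big.part.aa, big.part.ab, ...
    par2 create -r20 big.par2 big.part.*     # recovery volumes covering the parts
    # at the destination: "par2 repair big.par2" regenerates missing/corrupt parts,
    # then: cat big.part.* > big.tar.gpg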


Would you share details of the design of your "OVH VPS" facility?


Just thought about AWS Snowmobile.


If the data is below 50G in size and on my personal computer, then I just drop it into the Google Drive folder and it syncs to Google over night. When it is done, I export it and then send a link to the person(s). I pay $2 a month for the 100G account and I usually have about 50G disk space unused, so this is not an issue for me.

If it is above 50G and on my personal computer, I encrypt the data and then physically mail a USB stick with the data on it. Trying to arrange the downloading/uploading of 50G of data from my personal computer to another personal computer is a real pain. The people I would send/receive that much data to/from (data stored on my personal computer) are usually people who don't know much about ftp, scp, or easily sharing files over the Internet. Sending a USB stick is just so much easier and in many cases faster. I make sure the recipients know how to decrypt the data before sending it.

If it is on a server (For example, a side project instance in AWS), then I drop it into S3 bucket, export it and send the URL to the recipient. I just eat the cost of them downloading it. Usually I am making money off the work anyways, so it is the cost of doing business.


>If it is on a server (For example, a side project instance in AWS), then I drop it into S3 bucket, export it and send the URL to the recipient. I just eat the cost of them downloading it. Usually I am making money off the work anyways, so it is the cost of doing business.

Consider BunnyCDN; it's way cheaper than S3.


If the project is already in AWS then it has to be downloaded outside the network at least once. I think S3 is one of the cheapest methods to serve files on AWS. There is no benefit to a secondary service.

BunnyCDN does look good though.


BunnyCDN is a CloudFront alternative, not an S3 alternative.


They also offer an object storage product, it seems: https://bunnycdn.com/solutions/cdn-cloud-storage


> If the data is below 50G in size and on my personal computer, then I just drop it into the Google Drive folder

I do the same thing with OneDrive. Of course that's the same thing and I'm just agreeing with your answer, there are any number of DropBox-like services (including DropBox!). OneDrive is nice if you have Office 365 because you can password protect a link and send the password via a separate channel, which makes the process pretty secure (if you trust Microsoft with your data).


Magic Wormhole (https://magic-wormhole.readthedocs.io/en/latest/) if it's between OSes.

Between OSX, AirDrop works very well. I have sent >10GB files between Macs, quite quick as well.

I have never sent a 10TB file so I wouldn't know. None of my drives are that large yet :)


Magic Wormhole is good. These days I use croc,[0] which I find even better.

[0]: https://github.com/schollz/croc
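For anyone who hasn't tried it, croc's basic flow looks roughly like this (the code phrase below is just illustrative; croc generates one for you):

    # sender
    croc send big.tar
    # prints something like: Code is: 9412-foo-bar-baz

    # receiver
    croc 9412-foo-bar-baz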


I never understood why croc chose to copy the features of Magic Wormhole but implement them in an incompatible way.

Partially out of my annoyance with this I implemented a port of Magic Wormhole in Go that fully interoperates with the official python version: https://github.com/psanford/wormhole-william


I initiated the croc project. Others have asked me the same thing, so I'll bite:

I chose not to copy the features of magic wormhole because it seemed to me (at the time) to be very complicated and lacking in features crucial to myself (namely restarting transfers [1], sending folders without zipping them).

This has been to my benefit, I think, because I can iterate on croc quickly without having to conform to the wormhole spec and I've since added features still not available in wormhole (like restarting transfers [1], and ipv6 support [2]).

That being said, I really like magic wormhole and consider it my inspiration. And I'm glad that you ported it to Go!

[1]: https://github.com/warner/magic-wormhole/issues/88

[2]: https://github.com/warner/magic-wormhole/issues/354


Thank you. Croc is fantastic because it solves a problem that no other tool does. Magic Wormhole stumbles at the first hurdle (installability).


I'm sorry but I don't understand the hurdle?

As I mentioned in another thread installing Magic Wormhole is _easy_, just "brew install magic-wormhole" or "sudo apt-get install magic-wormhole" (or distro equivalent). Windows might be tricky but don't see how croc does it much better. I hadn't heard of Scoop but seems like you'd need to install _that_ first (which has its own dependencies). That said I haven't used Windows in many many years, maybe Scoop is the standard nowadays like Homebrew is for Mac? In which case I'd give a point to croc.

Also, with a Go app you could just share/distribute the binary, but then you also need to know where to put it. And you could do that with Magic Wormhole as well.

Or a step better, use something like PyInstaller (I wonder why the author hasn't done that?).


As BusTrainBus says, "installability" is useful, and it was one of my motivations.

I agree installation for magic wormhole is easy for Mac/Linux, but I think it's not easy on Windows. Windows is very common (77% of the market share for desktops [1]) so I wanted to keep croc easy to install for Windows.

In fact, if you are just receiving a file on Windows, all you have to do is download the binary from releases [2], unzip it, and double click on the binary - no terminal experience required. (Sending a file does require using a terminal, but all CLI apps have that hurdle...).

[1]: https://gs.statcounter.com/os-market-share/desktop/worldwide

[2]: https://github.com/schollz/croc/releases


Can you say what that hardcoded relay in croc is? Who maintains it, and how does it handle increases in demand, outages, etc.?


I maintain it. It's been up for about 2 years with pretty minimal downtime.

In any case, I encourage everyone to set up their own relay. It's easy. Make sure ports 9009-9014 are open and then run `croc relay`.

Connecting to your relay is as easy as `croc --relay ADDRESS:9009 send X`. And if you add the `--remember` flag you only have to tell it which relay to use once.


Would it be possible to sniff the protocol on the wire and provide some sort of cross-compatible magic wormcroc service?


Definitely possible. I'll happily take any PR that does this.


> so I'll bite

Just curious: pun intended?


I didn't see this one until now :)


I don't see an option to send text (I browsed through the README and Issues); is this not possible with croc? With Magic Wormhole it's the default to get a message prompt when running "wormhole send". Or you can do it as a oneliner with "wormhole send --text 'something'". Wormhole William can do this as well IIRC.

I use MWH often for sending stuff like URLs, passwords, API keys... having to create a file for that is quite annoying (and remembering to delete it afterwards).

"brew install magic-wormhole" or "sudo apt-get install magic-wormhole" is easy enough and I don't hang around with people who use Windows. ;)

Resumable file transfers sound like a great feature though. Not sure it's implementable with MWH the way it now works.


Just added it now. Send with croc send --text "hello, world" [1]

[1]: https://github.com/schollz/croc/releases/tag/v8.2.0


Nice! I just tried it out now. I also like the fact that "croc" defaults to receive, where with MWH you need to type "wormhole receive". Saves a lot of keystrokes.

Tab completion is something I'll miss from MWH (I'm already converted!) but I can live with it, and my non-technical friends aren't even aware of the concept.


Haha wow, thanks! I'll give it a spin today!


Haha, quick turnaround.


Not possible right now. This is a good idea, I can add it in :)


Oh this looks great! Thanks for this, checking it out.


I use the magic of AirDrop as well :) Unfortunately, sometimes I'm on my Linux machine (in which case python3 -m http.server works just fine), or need to send a file to someone else and then I'd like to have at least some form of monitoring/progress.

Edit: magic wormhole is awesome! Thanks! :D


OpenDrop works very well on linux for airdrop between linux+macos.

Provided of course that you have a wifi card that supports active monitor mode.. and you are root on your machine.

https://github.com/seemoo-lab/opendrop


There's a moderately useful one I've used quite a few times: https://snapdrop.net/

It _imitates_ the AirDrop UI, but is not compatible.

Use case is you open up snapdrop.net on both devices (nearby? same wifi?) and you can send files between, but I believe it all goes through a central server. Great for sending a few photos from phone to laptop or linux device when AirDrop isn't an option.


I wrote github.com/spieglt/flyingcarpet because I needed AirDrop on my Linux and Windows machines; it pretty much does the same thing as running a Python server, but over ad hoc wifi.


I once sent a huge file over airdrop and was shocked how much faster it was than sending through the local LAN.


Some copy methods verify and check during the transfer operation and others do not, so the final speeds are very different between them.


The major difference is that Airdrop establishes a direct WiFi connection, bypassing the router completely.


Just had a 41GB set of files sent to me in one transfer using magic-wormhole. Worked first try without a hitch. Pretty awesome from the receiving end.


This is local lan only right? Or are you port forwarding to expose it?


Netcat:

    $ nc -l 4242 > dest
And then on the sending end:

    $ nc hostname 4242 < file_to_send
This works great when you just need to get a file of any size from one machine to another and you're both on the same network. I used this a lot at one of my offices to schlep files around between a few of the machines we had.


This is especially fun when combined with piping through tar, and adding pv in the mix for a transfer speed "progress" bar.

It's the fastest way to transfer a collection of files on a local network, and it doesn't require temporary storage for the archive.
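A sketch of that combination, following the same listener-on-the-receiving-end convention as above (port and hostname are placeholders):

    # receiver: listen, then unpack whatever arrives
    nc -l 4242 | tar xf -

    # sender: pack the directory, watch throughput with pv, stream it over
    tar cf - somedir | pv | nc receiver-host 4242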


piping through tar is my "power move" when I want to steer a new recruit. I just do it casually in front of them... depending on whether they are awed or horrified by that, they'll get more hacky or more formal work.


You can also pipe through gpg if you want encryption along the way!


And through gzip/zstd/etc if you want compression!


And through tac if you want it in reverse!

(...And my axe!)


Note that gpg compresses by default, which is why it's so slow. You may wish to use lzop or zstd but also disable gpg's compression.

I usually use ssh (e.g. ssh -L 3333:localhost:3333) and avoid gpg, though I may use lzop depending on link and CPU speed.
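A sketch of that combination (compress with zstd first, then tell gpg not to compress again; -z 0 disables gpg's built-in compression):

    zstd -T0 big.tar                      # produces big.tar.zst using all cores
    gpg -z 0 --symmetric big.tar.zst      # produces big.tar.zst.gpg, prompting for a passphrase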


that's what the -C option to ssh is for


lzop if you gotta go fast :)


zstd easily outruns a gigabit link on most somewhat-compressible data.


What's an example of the syntax of this? I remember seeing a graybeard do something like this to huck some files to another machine.


Paraphrased from a real example:

  tar cf - files morefiles | ssh user@dest "tar xf -"
If you want to insert, say, compression:

  tar cf - files morefiles | zstd | ssh user@dest "zstd -d | tar xf -"


Nice. This to me is a great testament of the power of the Unix philosophy: small orthogonal tools that operate on streams of text.


And that's if you are not worried about corruption. You might at least want to run a checksum on both ends and compare them.
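For example, something as simple as this on both machines will catch silent corruption (the hash choice is arbitrary):

    sha256sum bigfile.tar    # run on sender and receiver, then compare the two digests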


Nah, unix nerds don't care about data corruption. That's the responsibility of some other layer, so by definition we're not in trouble if the whole thing blows up by our own negligence.

Checkmate, rsync.


Netcat is TCP; TCP is "reliable" and has checksums for each packet, so as long as the length is right you can be reasonably assured that the file transferred correctly.

Not that extra checksums are a bad idea...


TCP indeed has checksums! But they are known not to offer ultimate protection. I had to deal with hosts corrupting TCP streams due to bad hardware before. Which was luckily detected by higher level checksums.


Is there a streamlike tool (a la sed, gzip, etc) that applies forward error correction? I've been looking for such a thing but can't find one that scratches the itch.


I've done that many times, often in combination with tar and pv as mentioned in another reply. But I prefer to do the listening on the sending side. Feels more like serving than shoving I guess. Also, I remember needing a -q flag on one of the ends for graceful shutdowns.


I remember using AIM (AOL (America Online) Instant Messenger) and other instant messaging applications for direct peer-to-peer file transfer back in 2002. On fast university connections, you could transfer movies in a few minutes. It's crazy to me that direct, fast file sharing was easier and more ubiquitous almost 20 years ago than it seems to be now. More context for my oft-misunderstood early feedback to dhouston about Dropbox.


Private Nextcloud. Or some free-tier OneDrive/alternative is enough for me.

However, no one has mentioned a super simple service: https://wetransfer.com/ - simple as drag & drop, enter the recipient's address, SEND. Pretty simple if you want a non-techie to send you something.


I can recommend WeTransfer as well. Super simple interface, no sign-up required, free.


Wetransfer is brilliant. I’ve used their white label option in the past to transfer files to customers and allow them to send in files. It’s ace.


I work with geneticists and they all seem to use wetransfer for big data files.


+1 for WeTransfer.

A dead simple UI and a link is nicely emailed to the receiver.


Also emails the sender when the receiver downloads it.


Syncthing. https://syncthing.net/

It's like a private dropbox.

For files on the order of <= 10GB, magic wormhole is lovely: https://techcrunch.com/2017/06/27/magic-wormhole-is-a-clever...

`sudo pip3 install magic-wormhole` is an automatic "must run this command" on all my servers. Simple, secure way of sending files from A to B, with no bullshit. No account creation, even.
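The basic flow, for anyone who hasn't seen it (the code phrase is generated for you; this one is just illustrative):

    # sender
    wormhole send big.tar
    # prints something like: Wormhole code is: 7-crossover-clockwork

    # receiver
    wormhole receive 7-crossover-clockwork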


I've been working on solutions in this space for a couple years now. IMO making data on your local device available to others via HTTP range requests is the sweet spot between storing your data in the cloud and going full p2p.

Here's a couple of my projects:

https://patchbay.pub/

code[0]

Sender:

  curl https://patchbay.pub/random-channel/filename.bin --data-binary @filename.bin
Receiver:

  curl -O https://patchbay.pub/random-channel/filename.bin

https://fbrg.xyz

code[1]

This one works in the browser. You select a file and it gives you a link you can share with others. The underlying tech[2] is more complicated (relies on WebSockets) than patchbay, but the server is currently more reliable. I'm in the process of improving the patchbay server to be more robust, and I think it will eventually replace fibridge completely.

My current project is building something along the lines of Google Drive, but much simpler and designed to be self-hosted from within your home.

[0]: https://github.com/patchbay-pub/

[1]: https://github.com/anderspitman/fibridge-proxy-rs

[2]: https://github.com/omnistreams/omnistreams-spec


Sending a 10TB file on my internet connection would take 2.5 years of constant uploading. Shipping a hard drive is cheaper and quicker.


8 × 10,240,000 Mbit / (2.5 × 365 × 86,400 s) ≈ 1.04 Mbit/s?


That's slightly faster than my Silicon Valley upload speed.


It doesn't have to be large files; file sharing sucks in general. Even copy/paste across devices, even your own devices, is a pain. How many times have I emailed myself links just to share them across devices? And I'm not even speaking about sharing pictures with people not very accustomed to technology (in other words, don't try to get them to install a cloud storage app).

I gave that question a few tries, but I feel part of the problem is the market being saturated by giants doing half the job. Another part of it is the lack of interoperability regarding file sharing between operating systems. I mean: a native right click > send to > user, and a native notification to the user with a way to directly p2p download from that person. No extra software needed, seamless integration. Why is that so hard?

I really wish the big boys would give that a try rather than giving us toy-like features and price bump.


You're gonna love magic-wormhole.


It's great, but we're very far from OS-agnostic, seamless, integrated file sharing.

You see, we have protocols like HTTP and the web relies on it; everyone agrees this is how the web works and all OSes have one or several browsers able to access it. Something similar works with email. Yet we don't have anything to share files between computers. Imagine something like bittorrent directly integrated into all OSes, letting you right click > send to.


Having a generic "Send to" would only solve a part of it though. There is currently no standard way of connecting the two ends of the same cable to two computers and expecting them to exchange files without heavy configuration on both sides. Wireless goes the same way.


>I'm also curious on how you'd send 10TB or more.

For my 12TiB of data I use Syncthing when I need to sync it more often; otherwise, rsync.

I used rsync several times for billions of smaller files totalling 300GiB, but really it all depends on how I connect the nodes. I prefer Syncthing, but when only ssh is available, rsync is good too.

Currently the largest directory synced by Syncthing (among users that share usage stats) is over 61,384 GiB :)) https://data.syncthing.net/


Is there an iOS client for Syncthing?



I have run an internet-reachable whatever.com webserver from my home desktop computer for 20 years. I just copy or softlink the file to a place in ~/www/ and give out the corresponding web link. I have a couple of nginx locations prepared with pre-existing bandwidth throttles, so it's a matter of soft linking to the appropriate ~/www/dirwithassociatedrate/.

If it's for more than 1 person I upload it to a VPS if it's small (<20 GB) or make a torrent if it's not.


You can create a simple Python HTTP server to expose files in a directory on the local network:

$ python -m http.server 8000

and then you can start ngrok to expose the file

$ ngrok http 8000

That will give you a URL to share with whoever wants it.


Note that the built-in web server does not support resuming downloads, unless that changed recently. twistd works well and is just as simple.

ngrok is an amazing utility though!


Does ngrok limit data at all or just connections?


I’ve sent a few GB over this method using their free service. Doesn’t seem to have a transfer limit

https://ngrok.com/pricing


It depends in whether I want to send it to someone in the local network or through the internet.

For local network:

I use miniserve ( https://github.com/svenstaro/miniserve ) which is just a simple http server. There are similar tools for when I want to share it from the smartphone.

Through the internet it really varies:

Sometimes it is Firefox send ( https://send.firefox.com/ )

For photos, I use a self hosted photo solution piwigo ( https://github.com/Piwigo/Piwigo )

In earlier days it has been a self hosted nextcloud ( https://github.com/nextcloud/server ) instance. I still use it when the files are getting too large for Firefox send.

I also tried the already-mentioned wormhole, but this works only with tech people.


Firefox Send doesn’t work anymore


That's unfortunate. That was my go-to method.

From https://send.firefox.com/:

"Firefox Send is temporarily unavailable while we work on product improvements.

We appreciate your patience while we make the Firefox Send experience better."


I wonder how temporary it is because it’s been saying that for a couple months already. I think that service is dead.


I'm a bit sad because it was a nice product developed by Mozilla.


For a local network you can just do python3 -m http.server in the dir, or python -m SimpleHTTPServer (Python 2).

Or npm install -g http-server for nodejs,

then http-server

Local network is easy.


If you're using a recent version of node/npm, you can just do:

npx http-server .

To remove the global install step. This will install and run all in one command.


Related: If you need to transfer sensitive data over BitTorrent, Age is a good tool for encrypting it before transmission.

https://github.com/FiloSottile/age
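A minimal sketch of passphrase-based usage with the age CLI (file names are placeholders):

    age -p -o video.tar.age video.tar     # prompts for a passphrase (or can generate one for you)
    age -d -o video.tar video.tar.age     # receiver decrypts with the same passphrase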


Why a new tool? How is this better than gpg symmetric encryption, considering gpg is installed/available effectively everywhere?

Encrypt:

    gpg --symmetric file.dat
    (enter a password)
Decrypt:

    gpg --decrypt file.dat.gpg > file.dat
    (enter the password)


IMO, GPG is simply not a good tool and it should be replaced.

Others have said it better than I can. See e.g. https://latacora.micro.blog/2019/07/16/the-pgp-problem.html


That's said often about any tool which has been around long enough. People without experience come around and think they can replace an old tool with a better one, but it's usually only ignorance of either the complexity of the task or of how to use the tool properly, or both.


GPG is just not a very good tool. I think 'tptacek explained it quite well in the article I linked.

With that said, Magic Wormhole is also a very good tool for transferring files. It will encrypt in transit. So for many files, using a separate encryption tool is not necessary. (So far I haven’t tried it for large files.)


What no one seems to mention is that Magic Wormhole depends on the maintainer's server to negotiate transmission between clients. I don't like that requirement. Better to GPG encrypt and use bittorrent for a direct transfer. At least then we're using public trackers instead of some private server.


How would public BitTorrent tracker servers be better than a single rendezvous-server run by the tool's author?

With trackers, you're revealing the fact-of-transmission, transmission-size, & endpoints to any number of unknown remote parties. Potentially, attackers not even on the privileged network-path from origin to destination could tee off a copy of your encrypted data for offline analysis.

With Magic Wormhole's rendezvous-server, only one server, run by the same person whose code you're trusting (& can audit), briefly relays encrypted control-messages. (It might even be limited in its ability to deduce the size of the transfer – I'm not sure.) And if that's still too much, you can run your own rendezvous server.

It seems to me the amount of information leaked in the BT Tracker approach is strictly (& perhaps massively) more, to more entities, than that leaked in using the Wormhole author's server.


It's not a requirement that you use the default Wormhole rendezvous or transit server. It's all open source; you could run your own private servers if you wanted to (and it's easy).

Of course, the nice thing about Magic Wormhole is that its security does not depend on the server components being trustworthy. Use the default servers, use your own, use a different third-party server; it doesn't matter, your data is still secure.

Edit: If you are worried about privacy, magic-wormhole supports transit over Tor.


What is it more specifically that you're concerned about with regard to the server?


I want to comment on that article you keep referring to, but I don't want to clutter up the top of the thread so I'll do it here.

The author really wants to dislike PGP, but the reason everyone trusts PGP is because it's been around forever. Yeah there've been deficiencies, just like there've been deficiencies in OpenSSL, but that doesn't make it a bad tool. I could go on but this xkcd sums it up: https://xkcd.com/2347/

>Absurd Complexity / Swiss Army Knife Design

Git is complex, yet effectively every project ever uses it. The reason is you're fine to avoid the edge-cases and just focus on the main functionality, but that one time you need to do something ridiculously hacky, there's a tool to do it, instead of having to roll your own solution.

>Backwards Compatibility

Would you rather your software not have backwards compat? GPG has sane defaults, and everyone you talk to using modern versions will be secure by default. Not sure what the author is going on about with weak default password encryption:

    $ gpg -vv --symmetric test.txt 
    ...
    gpg: using cipher AES256
    gpg: writing to 'test.txt.gpg'
>Obnoxious UX

I don't really find GPG any harder to use than Git. http://irtfweb.ifa.hawaii.edu/~lockhart/gpg/

>Long-Term Secrets

By default, GPG keys expire in a year.

>Broken Authentication

I've never heard of any of this. Sign and encrypt, by default you get AES256 encryption and a SHA512 digest:

    $ gpg --sign --encrypt test.txt
    ...
    $ gpg -vv -o /dev/null --decrypt test.txt.gpg
    ...
    gpg: encrypted with 3072-bit RSA key, ID 74588E74DDD483BC, created 2020-09-02
      "test"
    gpg: AES256 encrypted data
    gpg: binary signature, digest algorithm SHA512, key algorithm rsa3072
>Incoherent Identity

Have an identity. Have other people verify it. Trust based on that. It's the same way the PKI works, you know, that thing that runs the entire internet. Except you don't need to trust CAs anymore.

>Leaks Metadata

He's not wrong about this one; normally you can see whose ID a message is encrypted for. If you're trying to be sneaky just use symmetric encryption I guess, it feels like a different use case.

>No Forward Secrecy

Definitely a different use case. There's no case where I want to decrypt a packet from the middle of a TLS conversation a few years later. But an encrypted attachment in an old email?

>Clumsy Keys

How are GPG keys harder to handle than SSH keys? Both are just blocks of base64 (gpg --export-secret-keys -a)... one is 80 lines while the other is 50, but does it really matter?

>Negotiation

Same argument as Backwards Compatibility, I think.

>Janky Code

The page he linked has 27 CVEs. Over the last 15 years. For comparison, OpenSSL has over 200.


> By default, GPG keys expire in a year.

This speaks to the article's complaint that GPG is usually the wrong tool for the job. For example if you just need to transfer a file securely (and have a fast, reliable internet connection on both ends and don't need to worry about active tracking of metadata), you can use Magic Wormhole (or a similar PAKE system) to do it. Imagine two scenarios: one with GPG and one with PAKE, and in both cases an adversary captures a ciphertext. With GPG if they can get your private key 6 months later, you're screwed. With PAKE the keys used to exchange the data are ephemeral, and so this isn't even a possibility.

> > Broken Authentication

> I've never heard of any of this.

I believe this is referring to authenticated encryption (AEAD), which is definitely valuable and GPG does not provide. AGE does.

Most of the other stuff you mention also falls under "wrong tool for the job". If you want a better argument, I'd talk about GPG having a web of trust system built in. On the other hand I think it's an open question whether this has ever brought real value to anyone. We have enough other secure messaging systems that it's no longer necessary for a single program to get there on its own. Usually you can count on some other mechanism to confirm your contact's identity.


>With PAKE the keys used to exchange the data are ephemeral, and so this isn't even a possibility.

AFAIK this isn't really true. If the adversary captures the initial key exchange plus all the data (ie the full transaction), then later discovers your PSK, they'll be able to decrypt. The only case where this helps you is if they capture some packets out of the middle without the initial handshake.

>authenticated encryption

It doesn't matter if your ciphertext is authenticated if it's both signed and encrypted, which is what I was getting at. In a normal TLS-like encrypted conversation, yes AEAD is very useful. But it's not applicable here.


> AFAIK this isn't really true. If the adversary captures the initial key exchange plus all the data (ie the full transaction), then later discovers your PSK, they'll be able to decrypt. The only case where this helps you is if they capture some packets out of the middle without the initial handshake.

Someone can correct me if I'm wrong, but I believe the idea behind a PAKE is that the password only authenticates the key exchange and doesn't contribute to it. So if you record all transmitted data you still need to break the key exchange which should have used a bunch of random bytes from both parties that are thrown away after use. The password is only there to prevent MITM, not to derive keys.

I believe magic wormhole uses SPAKE2, which has perfect forward secrecy. When using passwords to secure transmitted files it's really important to have forward secrecy otherwise you risk the transmission being recorded and the password being attacked offline which depending on your password strength might lead to trivially decrypting the data.


> Someone can correct me if I'm wrong, but I believe the idea behind a PAKE is that the password only authenticates the key exchange and doesn't contribute to it.

That's right. From memory, the passwords are just used to do a DH key exchange. The key is entirely ephemeral. Even if the entire ciphertext is captured, and even if the adversary then gets your password, they can't decrypt. To decrypt you'd have to MITM the key exchange, which would require knowing the password before the file is exchanged.


I wish the downvoters would explain what's wrong with the parent comment, as the counter-arguments raised by parliament32 seem completely reasonable to me. Perhaps it's just the length of the comment, and they would prefer it written up as a blog post somewhere.


Don't worry, this is normal for HN. As a commenter higher up said, "People without experience come around and think they can replace an old tool with a better one". People tend to like thinking along the lines of "surely old things must be bad" and you'll get met with disagree-downvotes anytime you try to explain why you want old software in fields like crypto or internet routing. Try defending BGP in any of the DDOS or hijack threads and you'll be met with the same fate.


> People without experience come around and think they can replace an old tool with a better one

Just for posterity, the tool linked above that everyone is discussing here was written by the Go Lead Dev for cryptography and security at Google.


> I wish the downvoters would explain what's wrong with the parent comment,

One obvious guess is contradicting Latacora. 'tptacek is well known here; his name alone gives significant weight to anything he writes.

In any case, they have valid points. PGP was written at a time we didn't understand cryptography as well as we do now. We can do better. Have done better, if half of what I've heard about Age is true.

Absurd complexity: We can definitely do simpler than PGP, at no loss of functionality.

Swiss Army Knife design: I think I disagree with Latacora there. Doing many things doesn't mean you have to do them poorly. There's no material difference between having 3 programs and having one program with 3 options, at least on the command line. If PGP does anything poorly, it's for other reasons.

Mired In Backwards Compatibility: well, that depends. It makes sense that PGP can decrypt old obsolete ciphers & formats. The ability to encrypt to those same old things wouldn't. For instance, PGP should no longer be able to generate RSA keys at all. Then, one year later, once all RSA keys have expired, new PGP versions should no longer be able to encrypt to RSA keys at all. (In an ideal world. More realistically, we should wait a couple more years.) Only the ability to decrypt old messages should be kept until the end of times.

Obnoxious UX: I don't know enough to have an opinion.

Long term secrets: Sure they're bad, but I don't think we can avoid them. People need your public key to send you anything, so it can't be too short lived. My guess here is that Latacora is attacking the whole file encryption + web of trust thing, not PGP in particular.

Broken Authentication: If attackers can trick PGP decoders into decrypting forged messages, that's fairly critical, and should be fixed even if it breaks backwards compatibility (we could have an optional `-legacy` flag or something to compensate). Now if you go sign and encrypt… well there are two possibilities: if you sign then encrypt, you run into the cryptographic doom principle: the decoder will decrypt then verify, which creates the temptation to process unauthenticated data. Many vulnerabilities have been caused by such errors. If you encrypt then sign, you reveal to the entire world that you signed this particular ciphertext. Not the kind of data most people would like to leak. In my opinion what we really want is authenticated key exchange followed by AEAD. With the Noise X pattern, you'd even hide your identity from snoopers.

Incoherent Identity: Okay, they're clearly attacking the very notion of web of trust, not PGP specifically. They say it doesn't work, but I'd like to know why. First, I'm not sure I want to take their word for it, and second, the causes might be fixable.

Leaks Metadata: that one is clearly avoidable. Noise X for instance uses an ephemeral key to encrypt the transmitted public keys, and the recipient's key is implicit. Can't know who the message is for (nor from) without the recipient's private key.

No Forward Secrecy: Different use case indeed. Again, Latacora is attacking the very notion of file encryption, not PGP specifically.

Clumsy Keys: I'm with Latacora on this one. The 50-line SSH keys are clearly RSA-based, and as such obsolete. Modern keys use elliptic curves, and those take one line, which is more easily copied & pasted in various contexts. Arguably a detail, though.

Negotiation: It's more than just backwards compatibility. Backwards compatibility can be achieved with a simple version number. If instead we have a range of algorithms to choose from, things get more complicated. Now, you can't avoid the need for different kinds of encryption: you can encrypt against a public key, or you can encrypt with a password. Possibly both. Beyond that, however, it's simpler to have a version number that implies a single public-key encryption scheme and a single password-based encryption scheme.

Janky Code: Can't judge for myself. I can guess, however, that much of it is caused by the (ever evolving) PGP specifications themselves. Probably more a consequence of all the other issues than a separate problem. Still, I think we can do much better. I mean, I've written an entire crypto library, and people give me hell for a single vulnerability in over 3 years. 27 CVEs in comparison would be worth burning in Crypto Hell for a long time.


Disagree.

There are many tools which are way older than gpg but still in wide use. For example, I have not heard any arguments that people should stop using things like "rsync" or "curl".


I don't see how any of the points in the article are relevant to the symmetric encryption method I posted. Yes, GPG can do a lot, and yes, some parts of it are kinda ridiculous. But symmetric pre-shared-key encryption/decryption is a solved problem, and I'd much rather trust GPG than some random's git repo.


The author of Age is definitely not some rando.

But if you like GPG then feel free to continue using it.


gpg symmetric encryption just derives a key from the password, so there's no forward secrecy. Using bittorrent in particular opens you up to the risk of third parties getting ahold of your encrypted file, and without forward secrecy they can attack the password offline, which typically has much less than 256 bits of entropy.

Using Magic Wormhole would seem to be far more secure than gpg and bittorrent since it has less risk of third party interception and uses SPAKE2 which has perfect forward secrecy.

If I had to use bittorrent to transfer a sensitive file, I would generate a random key and encrypt with it, then share the key over a different channel.
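
A minimal sketch of that, assuming gpg and openssl are on hand (filenames are placeholders; older gpg versions may not need the pinentry flag):

  # generate a random 256-bit passphrase, then encrypt symmetrically with AES-256
  openssl rand -base64 32 > key.txt
  gpg --batch --pinentry-mode loopback --passphrase-file key.txt \
      --symmetric --cipher-algo AES256 big.tar
  # share big.tar.gpg over bittorrent; send key.txt over a separate channel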


One reason age is beneficial is that it will generate a sufficiently long passphrase for you when encrypting a file. This is a good idea if you're using a transmission protocol without a handshake. I don't think gpg has this as an option?
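
For reference, a hedged sketch with the age CLI (if you leave the passphrase prompt empty, age generates a strong one and prints it; filenames are placeholders):

  age -p -o big.tar.age big.tar    # encrypt with a passphrase
  age -d -o big.tar big.tar.age    # decrypt on the other end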


I use Dropbox to send and receive files that are over the email/slack/etc attachment limit. Dropbox has a cool "request files" feature that lets people upload files directly to your Dropbox.

I've never needed to send TBs of data.


I asked the same question on HN about 3 years ago[0]. I'm curious to see how the answers have changed and what might have stayed the same.

[0]: https://news.ycombinator.com/item?id=15440571


SFTP Chroot server. lftp client using the mirror subsystem + sftp. It is multi-threaded, even for a single file, and supports rsync-like behavior even in an sftp chroot. I can max out any internet link using lftp (client) + sftp (protocol).
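
A rough sketch of both modes, with made-up host and paths:

  # segmented download of a single file over sftp
  lftp sftp://user@files.example.com -e 'pget -n 8 /uploads/big.iso; exit'
  # rsync-like mirroring of a directory, several transfers in parallel
  lftp sftp://user@files.example.com -e 'mirror --continue --parallel=4 /uploads ./uploads; exit'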


I'm partial to https://mega.nz/


When I worked as a federal contractor in 1996-1997, we used password-protected FTP servers to store giant text files for import to a database. Later on they moved it to a website where you log in and download the text file. Since the file was so large, the download took almost all day, and if it aborted you had to start all over again.

I had to keep Netscape open because it showed the status of the file download. People asked why my web browser was open all day; it was for the giant download that was part of my job.


Apparently not explicitly mentioned: rsync over ssh.

Windows now comes with real sshd - Mac has it, and linux/bsd of course has it.

For small files (less than 10gb?) I generally use scp - but I'm trying to get in the habit of using rsync. Generally, scp isn't a very good idea.

For larger filesystems zfs send over ssh.
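
Roughly, with made-up host, pool and dataset names:

  # resumable, preserves attributes, shows progress
  rsync -aP -e ssh ./bigdir/ user@example.com:/data/bigdir/
  # whole filesystems: snapshot, then stream the snapshot over ssh
  zfs snapshot tank/data@xfer
  zfs send tank/data@xfer | ssh user@example.com zfs receive backup/data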

For streaming: dlna/upnp over zerotier vpn.

Shame the Firefox Send service imploded - for smaller files, and where ssh isn't an option, it was a nice service. But a little too much hassle to self-host.


Bike courier with checkered past, simultaneously on the run from organized crime clans and mercenaries working for retired spies, probably has mirror shade implants, lives in squat, punk haircut, etc.

Didn't realize there was another way.


USPS


Resilio Sync (formerly bittorent sync) is my go-to for any file larger than a few hundred megabytes.


Same here. I use this to deploy large files to my VPS


I third this suggestion. If you have to send the file to multiple parties, they can all share among themselves (it's BitTorrent after all), speeding up the process even further.


WDT from Facebook - https://github.com/facebook/wdt/

We use it for copying hundreds of TiB's regularly, occasionally over a PiB.


https://Sendgb.com is a good way over the internet. It's possible to send large files of up to 20 GB. No registration or sign-in needed.


10TB or more, not sure. Anecdotally, I did have to send GBs of home directory locally recently. The fastest way, which I found out the hard way, was mounting SMBFS and `tar`ring it there. But of course it is unreliable without resume etc.
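
Something like this, assuming the SMB share is already mounted (paths are placeholders):

  tar cf /mnt/share/home.tar -C /home me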

For the thing you did, these could have worked.

https://file.pizza

https://instant.io

https://www.sharedrop.io


Not something I've used, and it's difficult to countenance anything called that, but what Globus (globus.org) has become seems to be quite popular for transferring large research datasets between academic sites which subscribe.

For ssh, there's https://www.psc.edu/index.php/hpn-ssh for improving wide-area transfer speeds, and something else, I think.


Since 5GB would take quite a long time to upload on DSL: almost universally dropbox, simply because it resumes + it's accessible to people with a url. If the recipient is ssh-friendly and there's a shared machine we can both access, rsync is quite a lot faster and more controllable and doesn't make my CPU angry for hours.

Otherwise I liked Firefox Send while it was running since I mostly trust them to not be full of intrusive nonsense.


I run FileBrowser [1] to share media with friends. I would suggest trying IPFS if you don't want to forward a port from the internet. You'll get similar download performance to FileBrowser once your node integrates with the network (this takes 30 minutes or so). Check it out!

[1]: https://github.com/filebrowser/filebrowser
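
If you try the IPFS route, the flow is roughly this (directory name is a placeholder):

  ipfs daemon &          # join the network
  ipfs add -r ./media    # prints a CID for the directory
  # the other side runs: ipfs get <CID>, or fetches it via a public gateway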


I and my customers all have symmetric FTTH. When I'm copying a file from me to me (working remotely), I use a VPN. These are usually .vhdx files, ranging from 6GB to 300GB. I copy them to here for troubleshooting, then I send them back.

When I want to make something large available to someone else, I post it on my local webserver. It points to an NAS on my LAN; anyone in the house can just drop whatever on it and hand out a link.


Syncthing works pretty well in my experience, even across NAT. I recently shared 10GB worth of photos with a family member and ~30min later it was done.


Worth noting that there's an expensive but very popular product from IBM called Aspera

I've never used it myself, but I've heard it's actually quite impressive; the business model makes it inaccessible to individuals, though.

Apparently they do some sort of complicated trace routing and open a ton of simultaneous routes. It probably also uses something other than TCP for error correction.

Seems like a good place for some disruption


- Airdrop for immediate stuff. Gets dicey around 30-50GB tho.

- iCloud drive for personal stuff I need to share between devices. I trust this to sync anything up to a TB reliably.

- Google Drive when I need to share to someone else.

- Occasionally external drives when I need to move data fast locally.

- Some combination of S3 / AWS CLI / EC2 when things go beyond personal computer capacity depending on where the data is coming from and going to.


For things smaller than around 10gb, I tend to use S3. Either with pre-signed links behind a little bespoke portal that people can log into and see what files I've "shared" with them, or just direct S3 links for those that want to automate it all.

For bigger things, there are two basic paths:

If it's to someone that's not primarily a tech person, then the data goes on a portable hard drive, and I either drive it to them, or mail it and walk them through accessing the data. In both cases I encrypt the data, generally with Veracrypt.

If it's to someone that won't mind using AWS's toolchain, and has decent connectivity, I'll use Snowcone or Snowball to get the data to S3 and give them S3 URLs.

I tend to get the data to S3 sooner or later so I can migrate it to Glacier. More than once I've had to recover GBs or TBs of data because the customer lost it, so I'm prepared now.
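
For the archival step, a hedged sketch with the AWS CLI (bucket name is a placeholder; an S3 lifecycle rule does the same thing automatically):

  aws s3 cp ./customer-data.tar s3://my-archive-bucket/ --storage-class GLACIER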


I have used Hightail for years, from when it was called YouSendIt. Here is info from their website on plans and pricing

https://www.hightail.com/file-sharing

Hightail offers four different subscription levels for file sharing, with varying restrictions on file upload limits and storage. With our free Lite plan, you can upload files up to 100MB each, and have a storage limit of 2GB. Pro plans, which start at $15/month, allow you to upload individual files up to 25GB each. Teams plans, which start at $30/user/month, allow you to upload individual files up to 50GB each. Business plans, which start at $45/user/month, allow you to upload individual files up to 500GB each. Pro, Teams and Business plans come with unlimited storage.


magic-wormhole: https://magic-wormhole.readthedocs.io/en/latest/

  pipx install magic-wormhole
  wormhole send /some/big/stupid/thing.tar.gz
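
The receiving end is just the matching command (the code shown is a made-up example; wormhole prints the real one when you send):

  wormhole receive 7-crossover-clockwork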


Also requires installing the Microsoft Visual C++ build tools, which are like 1.2GB :(


Someone proposed WeTransfer, but there is also Tresorit Send (https://send.tresorit.com/), which does the exact same job. Bonus point: they say they encrypt the data, and their servers are hosted in Switzerland.


For files less than 1GB I tend to use Telegram. It obviously has the downside of uploading to server X and then downloading from server X, but usually if I am sending a file to one person, I'll likely be sending it to others too, and therefore the ability to forward the file to others arbitrarily after the fact actually proves to be a pretty useful quality. If I care about the data's security, an encrypted 7z container or something will do.

For files I need more control over that are less than say 5gb, I tend to scp them to a web server I control, so that I can delete them afterwards.

For files larger than that, I'll create a private torrent. It's very rare that I need to transfer files this large, but I really like this solution.
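
Creating the private torrent is a one-liner with a tool like mktorrent (tracker URL and filenames are placeholders):

  mktorrent -p -a https://tracker.example.com/announce -o big.torrent big-file.7z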


Inside a company, I like transfer.sh, which is like an open source version of file.io: https://github.com/dutchcoders/transfer.sh

That's good for file transfers of around 5GB, give or take.
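
Usage is basically a curl one-liner (the domain here stands in for a self-hosted instance):

  curl --upload-file ./big.tar https://transfer.example.com/big.tar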


Personally I would use my own Nextcloud instance for up to 20-30GBs. Not sure about TBs.

What about using "Firefox Send"? (I've never used it so far.)

https://support.mozilla.org/en-US/kb/send-files-anyone-secur...

I read that the limit is 1-2.5GB => maybe you could break the file down and upload it in multiple pieces...

EDIT: oops, Firefox Send doesn't seem to be available anymore - https://support.mozilla.org/en-US/kb/what-happened-firefox-s...


People were using Firefox Send to "send" malware and other bad things to people. That's what happens if you let anyone anonymously host files on your domain for free.


And how are other platforms, like WeTransfer, dealing with that?


There are many ways of sending big files over the Internet, so to save time I'll go into the top three. The first one is the quickest if you use Gmail: click on the Drive button rather than the paper clip button you would use for regular attachments. It shows a list of your Drive files, from which you can pick the file to insert. Just as easy.

The second option: use another cloud storage provider if you don't like Google Drive.

My third alternative is even easier, but it's a little limiting because it caps you at two gigs either way. It's a service called WeTransfer, which was almost your only choice 15 years ago if you had to send a huge file.


I like gofile.io, as it's private and has no limits.

But for something like 10 TB or more, I'd see a torrent as the only way. My upload speeds are too slow for anything else; the connection would be reset. The torrent also helps prevent corruption.


Magic wormhole - it traverses NATs, is encrypted, requires configuration of the source or destination.

https://github.com/warner/magic-wormhole


Do you mean requires no configuration on the source or destination? :)


Installing it on Windows required me to install the Visual C++ build tools, which is a beast of 1.2GB. So yeah, it requires a lot. I'd rather recommend croc; I just dropped it on my path and it Just Works.



Being on windows is its own reward.

Snark aside, Croc does look a lot like magic wormhole. I'll definitely have a closer look at it.


:p little configuration


When I'm the sender, I use an http server in the current directory using http-server (node) or http.server (python 3).
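
For instance (the port is arbitrary):

  python3 -m http.server 8080
  # or, with node:
  npx http-server -p 8080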

When I'm the receiver, I'm using a self-written CLI tool "send me a file" (smaf): https://github.com/nikeee/send-me-a-file

It basically opens up an http server on my machine and lets a user upload a file via curl or a web interface. It also computes a sha256 hash while receiving.

These methods only work on a local network. I use my VPS and sftp/http for transfers beyond the local network.


S3. Sometimes I do that even when I'm transferring on my local network just because I know the flow so well. After all, my speed to the Internet is roughly the same as the speed on my LAN. They're both gigabit duplex.


You config IAM perms every time?


I actually just use https://docs.aws.amazon.com/cli/latest/reference/s3/presign....

Then I copy-paste that into a self-Slack and `wget` it on the other side.

It's almost equivalent ergonomically to a `python3 -m http.server`, and because it's the same whether I'm giving it to someone else or to myself, I spend less time thinking!
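
Roughly, with a made-up bucket and key:

  aws s3 cp ./big.tar s3://my-bucket/big.tar
  aws s3 presign s3://my-bucket/big.tar --expires-in 604800   # link valid for 7 days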


For anything 5GB plus - external drive & cycle over

The only person I'm sharing with lives near me, so sneakernet is the most convenient.

I once hit a transfer speed of 30TB/hour carrying a box of hard drives home from work.


"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway."

– Andrew Tanenbaum, 1981


I used to do the HTTP server approach when I was a student at MIT and had a static IPv4 and symmetric gigabit in my dorm room and therefore the recipient could download at usually their full downlink speed.

Now I live in the heart of Silicon Valley, a couple km from Google's Headquarters, and have a crappy 15 mbps uplink with no other options available to me, so I typically throw the file on Amazon S3, Google Drive, or Dropbox before sending a link to the other person so that they don't have to put up with a 15 mbps download.


I am actually transferring 400GB of data across the world right now. I decided to use Apple iCloud and it works pretty well.

Too bad the laptop I am transferring from is in bad condition and runs Windows, so sometimes the iCloud service freezes and I need to restart it.

I tried to create a torrent, but it seems to take forever on such a weak machine.

