
Transfer.sh – File sharing from the command line - nilsocket
https://transfer.sh/
======
qrv3w
There are so many tools that can transfer files between two computers. I
really like ones like this because you don't have to have SSH access or
forward any ports to send a file from A to B. It's in a similar vein to other
peer-to-peer utilities like zget [1], sharedrop [2], instant.io (webtorrent)
[3], filepizza (webtorrent) [4], magic-wormhole [5], toss [6], dat [7], and
many, many others.

During Hacktoberfest I also started my own, written in Go, so that my friends
could use it without installing a Python ecosystem [8].

[1]: [https://github.com/nils-werner/zget](https://github.com/nils-werner/zget)

[2]:
[https://github.com/cowbell/sharedrop](https://github.com/cowbell/sharedrop)

[3]:
[https://github.com/webtorrent/instant.io](https://github.com/webtorrent/instant.io)

[4]: [https://github.com/kern/filepizza](https://github.com/kern/filepizza)

[5]: [https://github.com/warner/magic-wormhole](https://github.com/warner/magic-wormhole)

[6]: [https://github.com/zerotier/toss](https://github.com/zerotier/toss)

[7]: [https://github.com/datproject/dat](https://github.com/datproject/dat)

[8]: [https://github.com/schollz/croc](https://github.com/schollz/croc)

~~~
nneonneo
Shameless plug: I wrote ffsend
([https://github.com/nneonneo/ffsend](https://github.com/nneonneo/ffsend)) to
interact with the Firefox Send experiment (send.firefox.com). With this, you
can upload a file which is end-to-end encrypted (i.e. it is uploaded
encrypted, and only you have the key), and accessed via a simple URL that you
can share with your receiver.

FF Send files last for 24 hours, and you can configure the number of downloads
allowed from 1 to 20. The maximum filesize is around 2 GiB. The reason I wrote
ffsend is that the official site loads the entire file into memory in order to
en/decrypt it, but my script is able to stream the en/decryption and thus
significantly reduce memory usage.
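
The general streaming idea in plain shell, purely as an illustration (this is
not Send's actual scheme; URLs and the passphrase are made up). openssl enc
processes its input as a stream, so memory stays constant regardless of file
size:

    # encrypt on the fly while uploading (OpenSSL 1.1.1+ for -pbkdf2)
    openssl enc -aes-256-ctr -pbkdf2 -pass pass:mysecret < big.iso \
        | curl --upload-file - https://transfer.sh/big.iso.enc

    # receiver: decrypt while downloading
    curl -s https://transfer.sh/SOMECODE/big.iso.enc \
        | openssl enc -d -aes-256-ctr -pbkdf2 -pass pass:mysecret > big.iso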

------
Rjevski
They really should’ve used some end to end encryption to protect themselves
against DMCA bullshit.

As it stands they will be abused for copyright infringement and the rights
holders will not only ask for the files to be taken down, but for them to
preemptively prevent those same files from being uploaded again. A huge rabbit
hole full of bullshit.

~~~
nilsocket
Indeed, but files are only hosted for 14 days.

~~~
jgrahamc
I would be much more concerned about people using this site for the transfer
of illegal material (such as child abuse imagery). That's something literally
any file sharing/image hosting site has to deal with. The 14 day limit won't
make a difference as people disseminating that type of material are likely
used to having to move it around frequently.

~~~
gorbypark
I built a proof-of-concept project a lot like transfer.sh at one point. My
solution was to have not only a time limit but a download limit as well. I'd
guess most people transferring files from the command line want to transfer
something from one machine to another, or maybe to a few other machines. I
was going to allow a limit of 10 transfers before the file/link went dead,
which would hopefully deter most nefarious people from using it to spread
files around widely.

~~~
vageli
Unless you are storing hashes, what would prevent someone from writing a
script that simply re-uploads the file to your service after downloading it?
That would effectively extend the share limit.

~~~
tafycent
Let's say you generate the identifier for the upload based on the file hash.
If you add a timestamp or random nonce then they'll have to redistribute the
link to the file every time they re-upload as it will change every time.
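
A quick sketch of that idea in shell (purely illustrative; I have no idea how
transfer.sh actually derives its codes):

    # ID = hash(file contents + fresh random nonce), so re-uploading the
    # same file always yields a different link
    nonce=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
    id=$({ cat file.bin; printf '%s' "$nonce"; } | sha256sum | cut -c1-8)
    echo "https://transfer.sh/$id/file.bin"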

~~~
gorbypark
Exactly. Nothing would stop someone from re-uploading it, but they'd end up
with a different URL, which would go dead pretty quickly if used to share
something publicly.

------
spurgu
What about deleting? If you accidentally upload your porn folder and notice it
just after sharing the link with someone.

~~~
grenoire
Sharing is caring!

------
msravi
    woof -i <ip_address> -p <port> <filename>

woof: [http://www.home.unix-ag.org/simon/woof.html](http://www.home.unix-ag.org/simon/woof.html)

1. Allows directory upload/download (tar/gzip/bzip2 compressed)

2. Local file server (doesn't go over the internet)

3. Allows upload form (-U option)

4. Allows file to be served <count> number of times (-c option)
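
For example, combining those options (a sketch, flags as listed above;
address, port, and filename made up):

    # serve a file at most twice on a chosen address/port, then exit
    woof -i 192.168.1.10 -p 8080 -c 2 file.tar.gz

    # or present an upload form so the other machine can send to you
    woof -U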

~~~
auscompgeek
Another alternative for simply serving your current directory, which you
probably already have:

    python3 -m http.server

or

    python2 -m SimpleHTTPServer

------
ktm5j
I like it! But for me ssh already provides a simple and secure way to move
files from one place to another:

    tar -cf - ./files.txt ./orDir/ | ssh host "(cd /dest/dir; tar -xf -)"

Given that ssh is so ubiquitous, I think this will always be my go-to.

~~~
Giroflex
What about good old scp?

    scp file.txt user@host:/dest/dir/

~~~
alex_young
Or rsync?

    rsync -zvh file.txt user@host:/dest/dir/

So many ways to do this :)

~~~
coatmatter
rsync has always been my favourite because it makes the most sense to me (and
the --help/man page is easy to read).

    rsync -n -avh --progress source destination:~/asdf/

for a dry run, followed by ctrl-p, ctrl-a, alt-f, alt-d, alt-d to remove the
-n flag and then execute that for the real thing.

Occasionally though, I'll also use sftp if I'm just pulling one thing -
perhaps even after sshing to the remote machine.

For all of these, SSH keys should be set up (and desktop logins secured) to
make life easier.

As for Android, adb push and adb pull -a seem to work better than mtp:// or
AirDroid in my experience.

~~~
LukeShu
After all these years, I still can't keep straight when I need a trailing
slash in rsync, and when I need to not have it.

~~~
coatmatter
If you think of it in terms of archives and whether you want to "extract" into
the current directory, or a new directory within the current one; that might
help.

rsync source destination will plonk the entire source directory inside
destination as a neat bundle.

rsync source/ destination will take the contents of source (but not the
directory source itself) and plonk them in destination.

I found the info page a little dry, but it does describe it succinctly:

    rsync -av /src/foo /dest
    rsync -av /src/foo/ /dest/foo

For some reason though, my head freaks out when it sees "foo" and "bar", but
all the example is saying is that the two commands do the same thing.

If in doubt though, just chuck everything into the destination ~/temp/ or
~/asdf/ and sort it out later.

To be honest though, most of the time I just use fish shell's autosuggestions
to guide me along.

------
red_admiral
There used to be a service called chunk.io that did this. Then, presumably
because growing popular caused them bandwidth/storage problems, or because
the service was abused, they had to make it invitation-only.

(The site still exists, but they never replied to my e-mail to their signup
address, so I can't say for sure whether they're still live.)

I wish transfer.sh good luck, and will bookmark them for now as "the new
chunk.io".

EDIT: since not all of the comments comparing this to SSH seem to have picked
up on it: this is a service where you can upload a file, get a link, and
e-mail the link to someone. You don't need any special software (such as
sshd) running on the download side.

~~~
int0x80
There are multiple sites like this. I don't know the size limits (I just
upload small stuff), but here are a few I use:

[http://ix.io](http://ix.io)

[https://ptpb.pw](https://ptpb.pw)

------
robert_foss
Personally I've been using magic wormhole lately. It's p2p and very easy to
use.

[https://github.com/warner/magic-wormhole](https://github.com/warner/magic-wormhole)

~~~
StavrosK
I love magic wormhole, you give the recipient three words and a number and the
file is sent peer-to-peer. No messing about with routing or anything.
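
In practice:

    # sender: prints a one-time code such as "7-crossover-clockwork"
    wormhole send big.iso

    # receiver: runs this and types in that code when prompted
    wormhole receive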

~~~
3pt14159
How is it for getting around weird networks (double NAT, etc)?

~~~
deathanatos
> _The library depends upon a “rendezvous server”, which is a service (on a
> public IP address) that delivers small encrypted messages from one client to
> the other. This must be the same for both clients, and is generally baked-in
> to the application source code or default config._

> _This library includes the URL of a public rendezvous server run by the
> author. Application developers can use this one, or they can run their own
> (see the [https://github.com/warner/magic-wormhole-mailbox-server](https://github.com/warner/magic-wormhole-mailbox-server)
> repository)_

> _For now, bulk data is sent through a “Transit” object, which does not use
> the Rendezvous Server. Instead, it tries to establish a direct TCP
> connection from sender to recipient (or vice versa). If that fails, both
> sides connect to a “Transit Relay”, a very simple Server that just glues two
> TCP sockets together when asked._

If I understand the docs correctly, it always uses a centralized server to
establish the transfer. Once the transfer is established, it'll attempt to
transfer the files directly, if possible, but if not, it'll fall back to using
a relay.

And with so many people trapped behind NAT these days, I don't expect the
need for the relay to be all that unusual.

------
devilsenigma
There's [https://send.firefox.com/](https://send.firefox.com/) as well. Backed
by Mozilla.

------
rockymadden
I made a minimal bash CLI for transfer.sh, since I regularly interact with it.
Nothing you couldn't do by hand, but it makes it easier to do some operations.
Uploading directories, encryption/decryption, piping, etc.

[https://github.com/rockymadden/transfer-cli](https://github.com/rockymadden/transfer-cli)

------
jy3
Of course it's written in Go.

It's amazing to see the language embraced that much for server side apps.

~~~
kim0
Static binaries are magic :)

~~~
aryamaan
Could you please go into detail: what's the basic difference between static
binaries and apps that use dynamic linking?

What are the benefits of one over the other?

Something like grokking this concept once and for all :D

~~~
creeble
Simplest explanation I can think of: static binaries have no dependencies.
They should just run, without barking about missing (shared/dynamic)
libraries or requiring you to install them.

Of course, even static binaries rely on some basic level of compatibility;
typically system-level things that don't change much.

Dynamically-linked binaries have the potential to create a massive dependency
graph that can be hard or even impossible (for a given OS installation) to
traverse.
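
You can see the difference with ldd (output trimmed; exact libraries vary by
system, and the binary names are made up):

    $ ldd /bin/ls                # dynamically linked: needs shared libraries
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
        ...

    $ ldd ./some-static-binary   # e.g. a pure-Go build
        not a dynamic executable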

~~~
kikoreis
Those annoying dependency graphs provide both standardized visibility and the
ability to fix and patch components independently of each other.

~~~
tafycent
That makes sense for large software systems, but for small tools that you
might want to carry around on a flash drive, or that need to always work
across multiple machines without a VM, static binaries make sense.

------
shawn
I prefer magic-wormhole.

    pip install magic-wormhole
    wormhole send foo.tar.gz

Bonus: the files are e2e encrypted.

Still, transfer.sh is hard to beat for flexibility. Magic-wormhole requires
that users install something.

------
telesilla
I had this problem yesterday: a misplaced USB drive and a large transfer to
do. [https://www.sharedrop.io](https://www.sharedrop.io) to the rescue.

------
amelius
I sometimes get such links in my inbox.

Then when I browse my email months later, the links no longer work :(

I wish tools such as this one would automatically incorporate the downloaded
files into my email history somehow.

------
dwelch2344
Wrap it in a `keybase encrypt ...` and it's a pretty solid solution
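
Presumably something like this (recipient and filenames made up):

    # encrypt for a specific Keybase user, then upload only the ciphertext
    keybase encrypt -i backup.tar.gz -o backup.tar.gz.enc somerecipient
    curl --upload-file backup.tar.gz.enc https://transfer.sh/backup.tar.gz.enc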

------
jedisct1
Or use Piknik
[https://github.com/jedisct1/piknik](https://github.com/jedisct1/piknik)

------
htor
scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp,
scp, scp, scp, scp, scp, scp, scp, scp, scp

------
cjhanks
I keep seeing these pop up over and over, and I think most of the ones I have
seen use a rendezvous server that is also an I/O relay.

A reliable-UDP transport with a rendezvous server (but no relay) would allow
for much more scalable P2P transfer. Unfortunately, I haven't found one
implemented like this...

------
presto8
Fiche is another solution, based on netcat (nc):

[https://github.com/solusipse/fiche](https://github.com/solusipse/fiche)
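
The client side is just netcat; termbin.com is the well-known public fiche
instance:

    # pipe anything to a fiche server; it replies with a URL to the paste
    echo "hello world" | nc termbin.com 9999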

------
amelius
Could it have a little more security, please? E.g. a longer, more
difficult-to-guess filename. And an encryption option would be nice.

~~~
petee
Well, it seems you'd have to guess not only the file name but also the
5-character code; as it appears to use uppercase, lowercase, and digits,
that's 62^5 ≈ 916 million possibilities.

...although on second thought, if you know the file name and can manage 750+
tries a second, that's 750 × 86,400 × 14 ≈ 907 million attempts, so you could
brute-force it prior to the 14-day expiration.

~~~
derimagia
Hopefully they use something akin to fail2ban.

------
amarant
If you happen to be on the same network, airpaste is pretty neat too:
[https://github.com/mafintosh/airpaste](https://github.com/mafintosh/airpaste)

------
NVRM
Btw, many Unix file explorers can connect to remote servers via ssh (unless
you're running X-less, of course). It's all integrated: local programs can
read and edit remote files as if they were regular ones, with drag and drop,
folders, and permissions on right-click. I use the shell a lot, but remote
bookmarks are wizardry!

~~~
furgooswft13
And of course there is always sshfs
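
e.g. (host and paths made up):

    # mount a remote directory over SSH and use it like a local path
    sshfs user@host:/remote/dir ~/mnt

    # unmount when done (Linux; on macOS use umount)
    fusermount -u ~/mnt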

------
alexgvozden
I love using this for simple transfers I need from computer to server

------
runj__
Wow! I've been looking for something like this since forever! scp really
doesn't cut it for servers only accessible from other whitelisted servers.

~~~
pmorici
That’s what the SSH ProxyCommand configuration directive is for.
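
e.g. in ~/.ssh/config (host names made up; newer OpenSSH can do the same with
ProxyJump or ssh -J):

    # reach "inner" by tunnelling through the whitelisted "bastion" host
    Host inner
        ProxyCommand ssh -W %h:%p bastion

After which plain scp file.txt inner:/dest/dir/ works transparently.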

------
paol
Have used it before. Works well.

------
xwvvvvwx
ipfs add <file> works remarkably well in my experience.
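
And the hash it prints can be fetched from any node or public gateway
(<CID> stands for whatever ipfs add printed):

    ipfs add backup.tar.gz    # prints: added <CID> backup.tar.gz
    ipfs get <CID>            # on the receiving machine, or fetch
                              # https://ipfs.io/ipfs/<CID> in a browser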

------
danielecook
How is it free?

~~~
amelius
Because it's not massively used yet.

------
jchook
Upload all your files to our internet web server for FREE!!1

------
unhammer
On a related note, I recently learnt that if you're on the same local network,
there's a much faster way to transfer than the old tar|nc trick[0]: udpcast.
You do

    $ udp-sender --min-receivers 1 --full-duplex --pipe 'tar czvf - theDirectory'

on the sender and

    $ udp-receiver --pipe 'tar xzp'

and at least on my home network it's 11x faster than tar|nc. There are some
caveats[1] about UDP not working well everywhere, you may have to open ports
9000 and 9001, and of course it's not encrypted at all, but for copying large
ISOs and such when you can't find your USB stick it's great. Just remember to
compare checksums afterwards.

[0] [http://www.spikelab.org/blog/transfer-largedata-scp-tarssh-tarnc-compared.html](http://www.spikelab.org/blog/transfer-largedata-scp-tarssh-tarnc-compared.html)

[1] [https://superuser.com/questions/692294/why-is-udcast-is-many-times-faster-than-netcat](https://superuser.com/questions/692294/why-is-udcast-is-many-times-faster-than-netcat)

~~~
userbinator
_Just remember to compare checksums afterwards._

Preferably a cryptographic hash... UDP is known for not being reliable at
all, and that's partly why it's so fast: the sender doesn't care whether the
packets reached the receiver, it just sends as fast as it can.
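
e.g. run the same pipeline on both ends and compare (assuming the tar transfer
above; the directory name is the one from that example):

    # hash every file in the tree, then hash the sorted list of hashes
    find theDirectory -type f -exec sha256sum {} + | sort -k2 | sha256sum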

~~~
mbrumlow
Well, sort of. The sender will in most cases care; it's the protocol (UDP)
that has no automatic checking or reporting. That sort of thing is left up to
the programmer.

The power of UDP is that it gives the sender more control over things like
how often transmissions are acknowledged (cf. the TCP window size) and how to
handle delays or errors. There is also an advantage in that middleboxes which
try to be smart and "make TCP better" for you can't really muck with UDP
packets all that much, because the application's own protocol for handling
them is unlikely to be known. This is why QUIC is such a big deal: a lot of
what a middlebox might want to muck around with (and does on TCP today) is
encrypted.

So I would not say that UDP is fast because it is unreliable; it is fast
because it allows a programmer to exploit the network more efficiently than
TCP can for a specific type of data being transferred. There are many
reliable UDP-based protocols that achieve faster speeds than TCP in different
situations.

