"You can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem."
FTP is just a horrible and massively outdated protocol which we really shouldn't be advocating on the modern internet[1]. Particularly when everything you've suggested can be done much more cleanly using existing tools on Linux / UNIX / OS X: scp, rsync, git, sshfs (some of which often come pre-installed on *nix images).
Note that all of these require transferring the file to the server first, before a link can be shared with anyone, and they require cleaning up afterwards. With my tool, the link is instantly available, and as soon as I stop the program, nothing is left anywhere, least of all on the server.
Didn't Dropbox's share links more or less solve this problem? Maybe not for 100% of the population, but I've managed to walk my mom through the process, and isn't that generally the success metric for things like this?
Dropbox has a storage limit, requires an account, and is non-free software.
The usage proposed here is a no-strings-attached approach.
Ideally you want something you can trust, hence free software, and something that does not store anything.
I'm pretty sure they did it wrong then. You can share a link to any file inside of your dropbox, and anyone can download it regardless of whether they have an account.
The problem isn't 'sending a file', it's 'peer to peer communication', which is hard without an intermediary and/or shenanigans due to ubiquitous NAT. I imagine in the everyone-has-an-IP universe of IPv6, this will no longer be an issue.
There are strong technical obstacles, but yes, the problem is more or less solved in different ways.
The point of Randall Munroe here is to emphasize the gap between the incredible things the Internet has made possible and the fact that sharing a simple file is still a hassle. And IT IS: creating an account is not what you can call a no-hassle process (I would probably add: do not forget about the Terms of Service, but I understand no one reads them nowadays).
> directly-accessible-from-anywhere computer on a network
followed by
> python -m SimpleHTTPServer
is a bit of a contradiction.
Dropbox has some pretty serious limitations for transferring files; you're limited on file size, you have to upload to their servers, you're subject to their definition of objectionable content, and sharing a particular file too much (for their definition of too much) will cause them to take down the file.
My suggestion for SimpleHTTPServer was in response to the blog post.
>Dropbox has some pretty serious limitations for transferring files; you're limited on file size, you have to upload to their servers, you're subject to their definition of objectionable content, and sharing a particular file too much (for their definition of too much) will cause them to take down the file.
I was being a bit idealistic, but in theory that's the story.
Yeah, xkcd handwaved away a couple of other solutions too: use a chat client to transfer a file? "no-one uses this obsolete one!"; use a file upload site? "I'm scared of porn popups and they don't work properly anyway"
On Windows, I run a local torrent tracker (built into uTorrent: bt.enable_tracker true) and share the magnet links.
The disadvantages are that I need to create a torrent file with my IP in the announce field for each file I want to share, and that the recipient must have torrent software (all of my friends incidentally do).
The advantages are built-in checksumming, no intermediate upload server, interoperability with various torrent software, the ability to stop and resume the transfer at any time, and very good transfer rates (for some reason I've had terrible performance with those browser-to-browser services like e.g. sharefest).
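For the curious, the manual torrent-creation step is mechanical enough to sketch in a few lines. This is only an illustration of the metainfo format, not what uTorrent actually writes; the tracker URL and piece size below are made up:

```python
# Hypothetical sketch: build a minimal single-file .torrent whose
# announce field points at your own tracker.
import hashlib

def bencode(value):
    """Bencode ints, byte/str strings, lists and dicts (BitTorrent encoding)."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, str):
        return bencode(value.encode())
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        # bencoded dict keys must be sorted
        return b"d" + b"".join(bencode(k) + bencode(v)
                               for k, v in sorted(value.items())) + b"e"
    raise TypeError("cannot bencode %r" % type(value))

def make_torrent(data, name, announce, piece_len=262144):
    """Return the bencoded metainfo for a single in-memory file."""
    pieces = b"".join(hashlib.sha1(data[i:i + piece_len]).digest()
                      for i in range(0, len(data), piece_len))
    info = {"name": name, "length": len(data),
            "piece length": piece_len, "pieces": pieces}
    return bencode({"announce": announce, "info": info})
```

You'd write the result of `make_torrent(...)` to a `.torrent` file and hand that (or the corresponding magnet link) to the recipient.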
Quick and easy file sharing is hard to do not only from a technical standpoint, but also from a legal standpoint. It only takes one DMCA lawsuit to shut down a file sharing startup. The likelihood of a lawsuit increases as the file sharing becomes easier, quicker, and more anonymous, since the chance that someone will use it for illegal purposes grows accordingly.
You have to set some restrictions on who can share what with how many people and how often. Even torrent trackers that do not store any files other than the tracker become targets. I really would love it if there was a way to type "scp /home/freehunter/your-granddaughter-laughing.mpg //grandma/home/grandma/videos" with UPnP to get around grandma's dial-up NAT. Unfortunately, the closest we have is Dropbox (requires an account) or BitTorrent (requires dodging law enforcement and porn popups).
As far as quick 'n dirty solutions go, if you're not overly concerned with security, you can always use "python -m SimpleHTTPServer" to serve all the files in the current directory over HTTP.
I also recall that KDE had a panel applet that did pretty much the same thing back in the 3.x days (not sure it's been ported to 4.x).
Do not forget that this solution is inherently single-threaded and does not benefit from a proper concurrency model like node.js, for example (which is also single-threaded by "design" but provides non-blocking IO).
So for big files with multiple downloaders, it may not perform as well as you want it to.
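If that bothers you, the standard library can serve concurrently too. A minimal sketch, assuming Python 3.7+ (where `ThreadingHTTPServer` and the `directory` parameter exist), which handles each download in its own thread so one slow client doesn't block the rest:

```python
# Sketch: a threaded variant of SimpleHTTPServer.
import threading
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def serve(directory, port=0):
    """Serve `directory` over HTTP on localhost; port 0 picks a free port.

    Returns the server; the chosen port is server.server_address[1].
    """
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = ThreadingHTTPServer(("127.0.0.1", port), handler)
    # Each request is handled in its own daemon thread.
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

On 3.7+ the one-liner equivalent is `python -m http.server`, which uses the threaded server by default.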
I believe he's connecting to an SSH server, and the server is proxying the request back to a local HTTP server. So there's no need to mess with port-forwards.
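If so, the pattern looks roughly like this. This is only a guess at the mechanism, not the author's actual tool; the hostname and ports are illustrative, and exposing the tunnel publicly also needs `GatewayPorts` enabled in the server's sshd_config:

```shell
# serve the current directory on localhost only
python -m SimpleHTTPServer 8000 &
# reverse tunnel: port 9000 on the server forwards to our local port 8000
ssh -N -R 9000:localhost:8000 user@example.com
# recipients fetch http://example.com:9000/thefile while both are running;
# kill either process and nothing remains on the server
```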
With things like google docs, I've found that file sharing has become a lot less necessary. We're sort of moving into a very file-less space, and I'm kinda fine with it.
So he's built a file sharing tool which requires an SSH server to work? So why not just use sshfs, scp or rsync?
I don't mean to stamp on someone's creativity, but SSH already has a whole bunch of tools for transferring files - all of which already work pretty well.
Because with file-share, the file is never stored on the SSH server; it is transferred the instant the other side starts the download. The SSH server is only required to make the file available at a publicly reachable address (which my own computer likely is not). So there is no need to clean up files after the download, for example, or even to have any free disk space on the server.
If you need an SSH server to begin with, then there's still some setting up you need to do beforehand. The pain of having to delete a few files afterwards is really nothing compared to renting a Linux box, setting up SSH accounts with fail2ban and/or SSH keys, and so on.
So file-share doesn't even work as a pain free solution like you suggest it does.
As I said before, I don't like to stamp on another's creativity. I'm just saying that there are better tools for doing this which already come pre-installed with most Linux distros, UNIX's and OS X.
Why would you have to mess with SSH account_s_? Only the sending side needs an account; no further setup on the server is required. It doesn't even have to be your server; your university's server is enough.
Most people are not at uni, and rented boxes usually have root open by default, expecting customers to then disable root access and create a user with su/sudo rights. Plus I tend to take things further: I create a dedicated group for SSH so that rsync/sftp users cannot use a system shell or port forward and are chrooted, while a very small subset of administrators have full SSH access.
Plus even if you do borrow someone else's box, can you be sure that they'd even be grateful that you're using up their bandwidth?
SSH is a dangerous protocol to leave open to the WAN, so setting up an SSH server properly, while not a time-consuming job, will still take longer and more effort than simply deleting a few files afterwards.
Dropbox solves the problem of sharing a file I have, but on the receiving side of the equation, dbinbox (http://dbinbox.com/) is probably the easiest solution for non-technical folks.
WeTransfer (wetransfer.com) is pretty popular in the Netherlands. It's easy to use, only requires an e-mail address, allows you to send files up to 2GB, and as an added bonus it's quite pretty. I really love its simplicity.
WebRTC is actually a C++ library [0]. One could implement a "share-file" command-line tool which uses this library to share files. Others would have to use another tool for "recv-shared-file", which probably takes a hex id and spawns a local web server or some other GUI.
It is way more complicated than that. WebRTC is all great and shiny, but there are strong obstacles to building a correct data-transfer application on top of it:
- You still need a server to do the "signaling" part
- NATs are everywhere and will sometimes make your p2p connections fail (14% of WebRTC calls, according to Google, back in May) [1]
- Ultimately there is a problem with the JavaScript APIs, which do not allow you to download big files seamlessly
The signaling part can be solved in many ways, but the server approach is the easiest and probably the most efficient/reliable one. For more information I recommend checking an article on the infrastructure needed for WebRTC applications [2]
The NAT problem is solved by ICE via STUN and TURN [3]. TURN is the ultimate fallback, it is a media relay.
TURN servers are bandwidth expensive and most of them require credentials in a way that does not fit WebRTC properly.
Finally, there is the browser problem. None of them provide a proper, standard way to download a stream of data coming from JavaScript (although, after checking the issue, sharefest may have solved this problem, but I don't know if it works properly with Firefox). There is the File API in Chrome, but it is not supported in Firefox. There is the blob URI, but you need the whole file in memory to create that object. This means that your application will slow down the browser in the case of a big file [4].
Ideally I wanted to see the Stream API implemented, but it seems that's not going to happen, so we need something else [5].
In conclusion, no, it's not as easy as you think it is. To have a proper file sharing application you will need to address ALL of these problems.
Moreover, it's good to note that these problems are not specific to WebRTC applications. Indeed, most of them are problems we have with the Internet in general. WebRTC is just an API on top of existing solutions.
I hope this answer was not too negative but informative enough :)
Having used WebRTC with the TURN server REST API (for a multimedia conferencing app), it's not that bad.
STUN will get through the NAT most of the time. With the latest browsers, Firefox to Chromium is a bit dodgy, but there are workarounds. Chromium to Chromium is perfectly fine.
A neat solution indeed, but for anyone capable enough to set up what you've concocted, why not just scp the files onto a lowendbox.com VPS? You also benefit from an intermediary with a decent up and down connection for the person you're sending to.
https://news.ycombinator.com/item?id=9224 :)