I liked to leave a file in the top level public ftp directory called "README" that contained just the line:
README: No such file or directory.
Somebody once emailed me complaining they couldn't read that file. I told them to run "emacs README" because emacs could solve any problem. Never heard back!
In my humble opinion, the label 'not encrypted' is a better fit than 'not secure'.
[ edit after replies ]
Agreed, perhaps I wrongly assumed people know what 'encrypted' means. Having grown up with computers I don't know any better, and I sometimes forget, or can't properly imagine, what laymen do and don't know.
HTTPS is more than just "encrypted", it's also "authenticated". That is, plain-text HTTP and FTP are not only "Not Encrypted" but also "Not Authenticated"; saying "Not Secure" captures both properties at once.
"Authenticated" is debatable. Someone has spent a few minutes on a third-party website to enable the green checkbox. Whether that means they are who you think they are is another matter.
Yeah, it's authenticated, but only to the degree that you know they own the domain name; there are no other guarantees that they are who they say they are. "Secure" is way too strong a word.
IMO, "Not Secure" means a lot more to the layperson than "Not Encrypted". They're trying to push HTTPS adoption by punishing websites that don't use it.
Sure "Not Encrypted" is more accurate, but not everybody is going to know what that means.
HTTPS is not a file transfer protocol, nor is it a network service designed for transferring files. People don't realize this until they need to send several gigabytes or terabytes of files in different directories, with different permissions, owned by different users, and suddenly find themselves reimplementing an FTP server over HTTP. Not to mention server-to-server syncing.
FTP is still very useful, and is depended upon by tons of corporations for bulk transfers and sensitive transactions. It is trivial to add SSL to FTP, or use SFTP if your clients/servers support it.
Except that no FTP server I've used has gone past a few hundred kb/s, even though the connection on both ends can handle much more. I end up having to use lftp (pget -n #) to download files from them, since no GUI client implements multiple connections properly.
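For anyone who hasn't tried it, lftp's segmented download is a one-liner; a rough sketch, with the server name and path made up:

    # Pull one file over 8 parallel FTP connections.
    lftp -e 'pget -n 8 /pub/images/big-distro.iso; bye' ftp://ftp.example.com

    # Same idea for a whole directory tree: mirror with several parallel segmented downloads.
    lftp -e 'mirror --parallel=4 --use-pget-n=4 /pub/images ./images; bye' ftp://ftp.example.com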
Firefox Send[0] is a pretty easy way to do one-to-one file sharing. As a bonus the file is encrypted before being transferred and decrypted after download.
Creating and maintaining a self hosted and completely self owned system that's secure and functional is a fun project but it is hardly necessary for one-to-one file transfer.
So yes I'm "peddling" a free web service over a DIY approach, especially as securing a public server isn't quite as trivial as you make it sound. It does take knowledge and sustained effort.
It depends on what services you are exposing. If it's just SSH and HTTP, the risk and difficulties are really low if you follow a few simple rules. The bonus is that you can learn many interesting things in an experiential, personal way.
This kind of approach is quite common nowadays, not only because people have different priorities, but also because there are strong financial incentives to promote "products and services" (websites that help you do something). The rest of us are happily using self-hosted services, wondering why everyone else is handing their own data to complete strangers on a plate.
It exists because HTTP is not a good replacement for it.
It's especially mediocre when you want to download or upload multiple files, like a directory with many files inside it.
A friend recently shared his file collection with me through OwnCloud. It was such a pain to download: each file is a separate link, I couldn't use wget to pull it because of the authentication, and while it has some CLI tools, they were buggy and I couldn't make them work. And of course no FTP. Essentially it forced me to click every single file, and I couldn't even verify that the files had downloaded correctly.
FTP is actually great for what it does and with clients like lftp it's fairly enjoyable to use. If you want to kill it, better make a worthy alternative.
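On the OwnCloud situation above: OwnCloud does expose a WebDAV endpoint (remote.php/webdav), so one workaround is to mount the share and copy with normal tools. A rough sketch, assuming the davfs2 package is installed, with a made-up server, share name and mount point:

    # Mount the friend's OwnCloud share as a filesystem, then copy locally.
    sudo mount -t davfs https://cloud.example.com/remote.php/webdav/ /mnt/owncloud
    cp -r /mnt/owncloud/SharedCollection ./collection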
Besides the lack of encryption and proper authentication in the base protocol, it also has a two-port requirement that makes it troublesome with firewalls/NAT.
The two port system allows a separate, text based control protocol with the ability to transfer files between two remote systems (FXP). These features don't really matter anymore and the second one is a security nightmare.
A better alternative would be SFTP, which is based on SSH and is pre-installed on most UNIX-like systems. The only problem is that web browsers don't support it natively.
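The client side is already there on most systems; a minimal sketch with a made-up host and path:

    # Recursive download over SFTP, using whatever authentication SSH is already set up for.
    sftp -r user@files.example.com:/pub/dataset ./dataset

    # lftp speaks SFTP too, so the familiar mirror workflow carries over unchanged.
    lftp -e 'mirror /pub/dataset ./dataset; bye' sftp://user@files.example.com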
Is that more common than HTTPS, simpler, and a standard feature of FTP? I daresay not. In other words, you're making up a convoluted scenario and pretending it's still plain FTP, even though your toolchain got far more esoteric.
It's esoteric because proper key management is a full-time job at best.
And every attempt at doing key management better is met with hostility from many in the community (rightfully or otherwise) as they tend to sacrifice "absolute security" in the name of greater "usable security" (often by centralizing the trust system in some way).
At least in Canada the last bank integration I worked on was FTPS (FTP over SSL). Files were otherwise in plaintext with account numbers and everything.
There were a few blog posts posted here from a payroll company not too far back. They explicitly kept using the term “SFTP” so I have some hope that at least a good portion of banks aren’t completely broken.
FTPS suffers from all the same problems as FTP (no definitive standard for FTP commands or their return codes, a poor client/server relationship because the data port is a callback, no standard for native compression, etc.) bar the clear-text problem*. But ironically the encryption then creates a new issue for clients behind NAT, due to the aforementioned lack of client/server distinction.
I don't think there is really much excuse not to run SFTP instead of FTP(S) given that most clients already support it. Or use one of the numerous other sharing / transfer protocols available to us now.
...or virtual machines, containers, or even just defining valid groups in sshd_config (see the sketch below). There is a whole multitude of ways of separating out SFTP users; it's really not a hard job to do.
Equally, a lot of Linux FTP daemons default to using /etc/passwd users too.
So what you're describing is a configuration detail the administrator needs to be aware of regardless of which transfer server they set up. In that regard FTP is no different from SFTP (or vice versa).
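To make the sshd_config point concrete, here's a rough sketch of an SFTP-only account; the group, user and paths are invented, and details (like the chroot directory having to be root-owned) vary a bit by distro:

    # A group whose members get SFTP but no shell.
    sudo groupadd sftponly
    sudo useradd -g sftponly -s /usr/sbin/nologin alice
    sudo mkdir -p /srv/sftp/alice/files             # /srv/sftp/alice stays root-owned
    sudo chown alice:sftponly /srv/sftp/alice/files # files/ is where alice can actually write

    # Additions to /etc/ssh/sshd_config (then restart sshd):
    #   Match Group sftponly
    #       ChrootDirectory /srv/sftp/%u
    #       ForceCommand internal-sftp
    #       AllowTcpForwarding no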
I'm not sure whether I would pick FTPS or SFTP as the superior protocol for automated file transfer (HTTPS is rather poor, although HTTPS-to-S3 is getting close).
On the one hand, SFTP has built-in support for key-based security (the client authenticates with its own key and checks the server's key), but the protocol itself is a mess that pretty much assumes the server is a UNIX box with a typical file system. We had the interesting challenge of writing a Windows-based SFTP server that stores data on Azure Storage, and even the most surprising things (like one command to create an empty file, followed by another command to upload to that file) turn into a silly mess, not to mention the difficulty of setting up an SSH server on Windows.
FTPS gets those things right: security is based on SSL (more implementations available) and most file-altering operations are done with single commands. On the other hand, it's still password-based (although encrypted) and the data-channel-versus-control-channel problem becomes a mess with load balancing.
No support for directory listing, no support for "last modified date", no support for key-based authentication.
Of course, you can use WebDAV (but then it's not plain HTTPS), or do kinda-FTP-over-HTTPS by listing directories in a specific format (but then it's not a standard anymore).
Upload is too simplistic by default (input type=file); other solutions (HTML5) are somewhat problematic to implement. Also, recursive download requires some thinking.
There's also WebDAV, or simply HTTP PUT, which works through any firewall and only requires 'curl' on the client side. Recursive transfer is still a problem (but FTP's unstandardised list format isn't that much better).
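A minimal sketch of that PUT flow, with a made-up server and user (the server does have to be configured to accept PUT, of course):

    # -T makes curl PUT the local file at the given URL; -u prompts for the password.
    curl -u alice -T backup.tar.gz https://files.example.com/incoming/backup.tar.gz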
In my experience, this strongly correlates with "...and your password is l3tme1n." "Wait, isn't that insecure?" "Nah, it's fine, who would want to hack us anyway?"
In other words, anyone offering FTP as an integration path in 2017 is most likely wide open for a host of other security holes.
Some people only have FTP as a client capability. We support FTP, FTPS and SFTP, and we constantly push for SFTP with key-based authentication, but every so often FTP is really the only solution that the client is willing to pay for.
For most of these clients, "FTP support" takes the form of a configuration page in their data export wizard, with fields like "host", "username", "password" and "path". If there isn't an "HTTP" checkbox, then they don't have HTTP support.
If you download Caddy with the "http.webdav" plugin, you get a single binary that can do the regular FTP tasks, plus autoconfigures SSL with Let's Encrypt :)
So if rsync is only for Linux, then how does everyone else get any work done? On Linux it is the preferred option: there is a delete mode, the ability to ignore folders, and differential compression. It just works. Plus you don't have to move a mouse around.
The other common way for me to upload or download files is with version control. Imagine if we integrated git into the browser and server to have something vastly more sensible.
Weird. I seem to have deployed scripts with rsync on Linux, Mac OS, Windows native, Windows Cygwin, Android and various unixlikes. Not available out of the box, granted. "Prohibited at all companies" is an absurdly wide claim, my actual showstopper when checking viability was UDP behind NAT.
FTP is still valid for larger files, mainly in the printing industry, where job uploads can easily run to multiple hundreds of MB. Some shops now use third-party document transfer sites, but most still rely on FTP (usually with password access).
You did not mention any advantage of using FTP over any other file transfer protocol. "Multiple hundreds of MB" is not much by today's standards, and FTP is not especially suited to transferring large amounts of data. Literally any other file transfer protocol is better, like SFTP or HTTPS.
That's exactly it: the advantage is that people don't want to change, and it's taken them years to get used to whatever FTP client they've been given or trained on.
Every competent FTP client I can think of can also handle SFTP and FTPS connections. With SFTP it's just a matter of telling the client that's what you're using so it can change to the different port; with FTPS you don't even have to do that.
It's true that people resist change, but the change they have to swallow here is so tiny and the upside so massive that it's darn close to malpractice to continue giving them the option to use plaintext FTP.
I once had trouble with a client who used his business data solution for uploading data to our server (we support FTP, FTPS and SFTP, and we recommend SFTP + keys). It was an old java-based application with no updates, no SFTP support, and all attempts at FTPS failed in the face of certificate issues (outdated list of CAs, no ability to force a certificate). The client had no understanding of the issue and the technician was an expert for this specific business application, with no ability (or willingness, maybe) to write any sort of script or workaround outside that application.
I think a lot of the problems we have with new protocols come down to the fact that we simply can't add new ports. So many routers block anything that's not 80, 443, or 21.
The advantage is that the newspaper I help won't have to move from PageMaker to something else, and can keep using Fetch on Mac OS 9 to do its transfers to the press. New transfer schemes might mean moving files to, say, OS X and then over to whatever file service, which is a lot of complex steps (for him).
Insight: small businesses don't like change when things still "just work", especially when the change only seems to add complexity and cost.
Did you not read the parent comment? Not everyone works for a tech-savvy Silicon Valley company where people just breezily swap out their tech every few months.
That hardly does the same thing. You get the directory listing files stored, which you don't want. You don't get file permissions transferred. There are issues with dotfiles.
Worse, you have to secure your server to make sure .php files in someone's home directory don't accidentally get run.
By default a web server will run files - unless you specially configure it not to. FTP never does that. It's a LOT harder to secure with HTTP. And if you have places where you want stuff run, and other places where you just want files transferred? It's possible, but the likelihood of making a mistake in the configuration is very, very high.
HTTP is really not suited for bulk file transfer. SFTP works though.
> By default a web server will run files - unless you specially configure it not to
You can't really run anything with the default nginx configuration, and even after some tweaking you still need some FastCGI/WSGI/whatever-compliant daemon to pair it with before it can "run something". The same goes for `python -m http.server`, except you basically can't make it execute anything at all.
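For instance, the stock Python one-liner only ever serves bytes, it never executes them (port and directory are arbitrary; --directory needs Python 3.7+):

    # Serve /srv/pub read-only over plain HTTP on port 8000; a .php file comes back as text, it is never run.
    python3 -m http.server 8000 --directory /srv/pub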
I have definitely worked in this world - I feel like the right replacement for FTP for sending large assets around would be to host them in an S3 bucket, rather than use web-based services where you add files to your account and share them out with somebody. I know Dropbox and Google Drive are commonly used for this, but isn't that more of a hassle than S3?
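With the AWS CLI that workflow is short; a sketch with a made-up bucket and key:

    # Push the asset up, then hand out a time-limited download link instead of credentials.
    aws s3 cp ./press-assets.zip s3://example-print-bucket/jobs/press-assets.zip
    aws s3 presign s3://example-print-bucket/jobs/press-assets.zip --expires-in 86400   # link valid for 24 hours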
I don't see why FTP is any more valid than HTTP for this. Uploading from the browser directly to S3 isn't difficult. You can support resume and whatever auth strategy you want as your backend generates the signing key.
We do this (well, with Azure Blob Storage instead of S3, but it's the same). It works because we can have our customer execute[#] some code that authenticates them, queries our server for the signature, then slices up the file and uploads it, then notifies our server when it's done.
[#] including JavaScript in the page.
Obviously, turning that into an automated process that runs from a Windows Scheduled Task (or worse) is not as easy.
~10 years ago, a huge client came to our huge company asking for some way to manage synchronization over FTP. I was stumped. Very sensitive data. Not even rsync...
Nope (both are capable of saturating a line). Also, does it protect the data in transit? Nope (HTTPS does, rarely do I see plain HTTP any more). Does it have resume capabilities? Nope. Does it play well with NAT or proxies? Nope. Does it mangle data in transfer? Yes.
The only thing that keeps FTP around is its ubiquity.
> The argument field represents the server marker at which
> file transfer is to be restarted. This command does not
> cause file transfer but skips over the file to the specified
> data checkpoint. This command shall be immediately followed
> by the appropriate FTP service command which shall cause
> file transfer to resume.
And it would seem the definition from the RFC isn't even what's commonly implemented anymore, per https://cr.yp.to/ftp/retr.html (ctrl/cmd+f "The REST verb" no quotes)
Finally, if we're including HTTPS and comparing HTTP v FTP, why not include FTPS? https://en.wikipedia.org/wiki/FTPS (edit: probably because Chrome doesn't support it. Heh.)
I remember the collective joy when ncftp [1] turned up as an alternative to the default ftp client, suddenly making it possible to resume FTP transfers. An important feature when you've started downloading a large (several MB) file over a dialup modem but someone in the house needs to make a phone call.
IIRC FTP clients throughout the '90s were actually more reliable than browsers at resuming downloads.
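These days the same resume trick is a flag away in the usual CLI tools (URLs are placeholders):

    # Resume a partial download where it left off.
    curl -C - -O https://example.com/pub/big.iso    # "-C -" tells curl to work out the offset from the partial file
    wget -c ftp://ftp.example.com/pub/big.iso       # "-c" continues a previously interrupted download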
It's much harder to transfer entire directories with HTTP.
You would need SFTP at least. But in the setups I've seen, SFTP wants you to be a real user on the remote server, while with FTP it's easier to have FTP-only users (logins that are valid only for FTP, not actual system accounts).
It might be like shooting sparrows with a cannon, but FreeIPA can define policies where you specify which service on which host or hostgroup is allowed to authenticate which user (or group).
So you can have a user who is de facto "on the remote server", but the only thing he can do is log in to the specified SFTP service.
At smaller scales, it is certainly possible to do something similar without FreeIPA, using only PAM.
Faster to develop, maybe. Every time I try to do bulk uploads of files over HTTP, I deal with APIs that are aggressively developer-hostile at both ends: bad error messages, painful workflow, massive amounts of boilerplate, and missing features that you take for granted in an FTP client.
Just because the technology is physically capable of doing it doesn't mean it's at all usable.
The articles mention the added complexity of FTP: having both a control and a data socket, randomized ports, and connection latency (N round trips vs. 1 for HTTP, IIRC).
That said, HTTP has an aura of atomic resource download, while FTP comes with :drumroll: a file-system-access aura. I hardly ever upload anything from a shell over HTTP. And before the era of HTML5 drag and drop, it never occurred to me to upload through a browser over HTTP either. That has changed a bit, though. Still, my favorite way to upload over the web is webtorrent.
I believe it's higher. It's especially visible if transferring multiple smaller files, as each download requires a separate new connection (no data connection reuse).
I'm not sure why WebDAV isn't more popular. It has its oddities and implementation issues, but it's nowhere close to FTP's weirdness. And AFAIK WebDAV support is either built in or readily available to install on all mainstream OSes.
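As a rough illustration of how ordinary the tooling is (server URL, user and filenames are made up):

    # Listing a collection is one request; an empty-body PROPFIND defaults to "all properties".
    curl -u alice -X PROPFIND -H "Depth: 1" https://dav.example.com/files/

    # Creating a directory and uploading into it are first-class verbs.
    curl -u alice -X MKCOL https://dav.example.com/files/reports/
    curl -u alice -T q3-report.pdf https://dav.example.com/files/reports/q3-report.pdf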
Delta transfers? Multiple freshness-checking methods? Multiple transport protocols? Built-in checksumming? This is not your grandfather's rsync, mind you; the protocol is at version 30, IIRC. (As opposed to FTP, which has made zero progress in the last two decades.) Now, I'm not pretending it's a universal tool (there are far more efficient protocols for e.g. file distribution), but at one-way synchronization tasks it excels.
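For the curious, the one-way sync case it excels at looks roughly like this (host and paths invented):

    # One-way sync: archive mode, compress on the wire, delete removed files, skip a cache folder,
    # and compare by checksum rather than size/mtime.
    rsync -azc --delete --exclude 'cache/' ./site/ deploy@web.example.com:/var/www/site/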