Chromium: FTP resources will be marked “Not Secure” (groups.google.com)
188 points by stablemap 9 months ago | 123 comments

I liked to leave a file in the top level public ftp directory called "README" that contained just the line:

    README: No such file or directory.
Somebody once emailed me complaining they couldn't read that file. I told them to run "emacs README" because emacs could solve any problem. Never heard back!

He's probably still trying to quit emacs

M-x vim-keys-mode, then :q ?

In my humble opinion, I find the label 'not encrypted' a better fit than 'not secure'.

[ edit after replies ]

Agreed. Perhaps I wrongly assumed people know what 'encrypted' means. Growing up with computers, I don't know any better and sometimes forget, or can't properly imagine, what laypeople do and don't know.

HTTPS is more than just "encrypted", it's also "authenticated". That is, plain-text HTTP and FTP are not only "Not Encrypted" but also "Not Authenticated"; saying "Not Secure" captures both properties at once.

"Authenticated" is debatable. Someone has spent a few minutes on a third-party website to enable the green checkbox. Whether that means they are who you think they are is another matter.

Yeah, it's authenticated but only to the degree that you know that they own the domain name, and there are no other guarantees that they are who they say they are. Secure is way too strong a word.

They're certainly authenticated as the domain.

They're not authenticated as anything else.

IMO, "Not Secure" means a lot more to the layperson than "Not Encrypted". They're trying to push HTTPS adoption by punishing websites that don't use it.

Sure "Not Encrypted" is more accurate, but not everybody is going to know what that means.

But "secure" is what the "s" in "https" actually stands for.

To the users they’re trying to target here, Not Secure makes a lot more sense than Not Encrypted I think.

Not Secure: <reason0>, <reason1>, <reasonN>

Good. I'm very much surprised that FTP still exists in this century.

HTTPS is not a file transfer protocol, nor is it a network service designed for transferring files. People don't realize this until they need to send several gigabytes/terabytes of files in different directories with different permissions owned by different users, and suddenly they need to implement an FTP server over HTTP. Not to mention server-to-server syncing.

FTP is still very useful, and is depended upon by tons of corporations for bulk transfers and sensitive transactions. It is trivial to add SSL to FTP, or use SFTP if your clients/servers support it.
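On the "trivial to add SSL" point: explicit FTPS takes only a few lines with Python's standard library. A minimal sketch, assuming a server that supports AUTH TLS (the host, credentials, and paths below are placeholders):

```python
from ftplib import FTP_TLS

def upload_ftps(host, user, password, local_path, remote_name):
    """Upload a file over explicit FTPS (FTP + TLS).
    Sketch only: host and credentials are placeholders."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)  # login() issues AUTH TLS first, so credentials travel encrypted
    ftps.prot_p()               # switch the data channel to TLS as well
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {remote_name}", f)
    ftps.quit()
```

Without the `prot_p()` call, only the control channel is encrypted and file contents still cross the wire in plaintext.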

Sort of disagree... HTTP (and HTTPS by extension) has become the veritable Swiss Army knife of the internet. It's not just for hypertext anymore.

FTP is way nicer than any of the other services that can be used.

To send someone a file, I send them a link that they can easily use to download from the FTP site.


It also allows them to upload stuff to it too.

Except every FTP server I've used hasn't gone past a few hundred KB/s, even though the connection on both ends can handle much more. I end up having to use lftp (pget -n #) to download files from them, since no UI has multiple connections implemented properly.

Any reasonable FTP server (e.g. vsftpd) will saturate ethernet. Maybe the ones you were using were rate limited?

Very common thing to do for public ftp sites.

They have. They just cost money.

Firefox Send[0] is a pretty easy way to do one-to-one file sharing. As a bonus the file is encrypted before being transferred and decrypted after download.

[0] https://send.firefox.com/

Oh boy... you're really peddling a web service over a self-hosted and completely self-owned system?

Has the ability to develop and work on technology dropped this low? This is a regression.

Creating and maintaining a self-hosted and completely self-owned system that's secure and functional is a fun project, but it is hardly necessary for one-to-one file transfer.

So yes I'm "peddling" a free web service over a DIY approach, especially as securing a public server isn't quite as trivial as you make it sound. It does take knowledge and sustained effort.

It depends on what services you are exposing. If it's just SSH and HTTP, the risk and difficulties are really low if you follow a few simple rules. The bonus is that you can learn many interesting things in an experiential, personal way.

This kind of approach is quite common nowadays, not only because people have different priorities, but also because there are strong financial incentives to promote "products and services" (websites that help you do something). The rest of us are happily using self-hosted services, wondering how anyone else can hand their own data to complete strangers on a plate.

It exists because HTTP is not a good replacement for it.

It's especially mediocre when you want to download/upload multiple files like a directory with many files inside of it.

A friend recently shared his file collection with me through OwnCloud. It was such a pain to download. Each file is a separate link. I can't use wget to pull it, because of authentication. It had some CLI tools, but they were buggy and I couldn't make them work. And of course no FTP. Essentially it forced me to click every single file. I couldn't even verify that the files downloaded correctly.

FTP is actually great for what it does and with clients like lftp it's fairly enjoyable to use. If you want to kill it, better make a worthy alternative.

Besides the lack of encryption and proper authentication in the base protocol, it also has a two-port requirement that makes it troublesome with firewalls/NAT.

The two-port design allows a separate, text-based control protocol with the ability to transfer files between two remote systems (FXP). These features don't really matter anymore, and the second one is a security nightmare.

A better alternative would be SFTP, which is based on SSH and is pre-installed on most UNIX-like systems. The only problem is that web browsers don't support it natively.

> I can't use wget to pull it, because of authentication.

What are you talking about? Wget supports --user and --password options.
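Those wget flags speak HTTP Basic auth, and the same authenticated pull can be scripted with Python's standard library. A minimal sketch (the URL and credentials are placeholders; if the server uses cookie/session logins rather than Basic auth, neither this nor wget's flags will help):

```python
import urllib.request

def fetch_basic_auth(url, user, password):
    """Download a URL protected by HTTP Basic auth: the urllib
    equivalent of wget's --user/--password options."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, user, password)  # None realm = match any realm
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(url) as resp:
        return resp.read()
```

The handler retries automatically with credentials after the server's 401 challenge, which is the same dance wget performs.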

Owncloud/Nextcloud has WebDAV support as well, though that's not great either I guess :/

And I don't know about you, but I don't have a go-to WebDAV client either, so it doesn't solve this issue.

Windows has one built into Explorer, and has for ages (since WinXP at least).

Hey let's integrate.

Okay want to just FTP the files over?

Sure thing

Data crosses the public internet in plaintext. Auditors unhappy.

Except maybe I've gpg-encrypted the files for the recipient before sending them. Users unhappy.

Is that more common than HTTPS, simpler, and a standard feature of FTP? I daresay not. In other words, you're making up a convoluted scenario and pretending it's still plain FTP, even though your toolchain got far more esoteric.

I wish GPG wasn't so esoteric of a tool, considering our email protocol situation.

It's esoteric because proper key management is a full-time job at best.

And every attempt at doing key management better is met with hostility from many in the community (rightfully or otherwise) as they tend to sacrifice "absolute security" in the name of greater "usable security" (often by centralizing the trust system in some way).

"Proper key management is a full-time job" is such a surprisingly accurate description of the situation! <3

Aye, it's a constant battle of security versus usability. I'm on the centralizing boat, hoping that Keybase will prevail.

That's how banks send records to each other, iirc. Instead of changing the protocol, they just wrapped the data.

At least in Canada the last bank integration I worked on was FTPS (FTP over SSL). Files were otherwise in plaintext with account numbers and everything.

There were a few blog posts posted here from a payroll company not too far back. They explicitly kept using the term “SFTP” so I have some hope that at least a good portion of banks aren’t completely broken.

EDIT: http://engineering.gusto.com/how-ach-works-a-developer-persp...

I was misremembering a bit. They said “secure FTP” and then just said FTP thereafter.

Secure FTP may mean FTPS, i.e. FTP over TLS/SSL.

Yep, down below I already made that observation.

"I know, hence my edit. In my mind I read secure FTP -> SFTP, damn brain. But after rereading, I'm thinking they meant FTPS. "

SFTP is file transfer over ssh. The name is a misnomer.

I know, hence my edit. In my mind I read secure FTP -> SFTP, damn brain. But after rereading, I'm thinking they meant FTPS.

The problem with that scenario is that the recipient doesn't know if those are the right files.

To be fair, most places that say "FTP" are at least running SFTP. It's still a bad solution for an API, but it's not quite that bad.

Why not just SFTP the files over?

My experience is that when people say FTP a lot of the times they can mean SFTP.

... or FTPS (which is different from SFTP) !

Different but still worse than SFTP:

FTPS suffers from all the same problems as FTP (no definitive standard for FTP commands or their return codes, a poor client/server relationship due to the data port being a callback, no standard for native compression, etc.) bar the clear-text problem*. But ironically the encryption then creates a new issue for clients behind NAT, due to the aforementioned lack of client/server distinction.

I don't think there is really much excuse not to run SFTP instead of FTP(S) given that most clients already support it. Or use one of the numerous other sharing / transfer protocols available to us now.

* unless you misconfigure FTPS that is

FTPS lets you (easily) use entirely virtual (custom to the FTPS server) users.

SFTP requires the use of 'real' users (maybe not with fancy daemon software or a headache of PAM rules).

...or virtual machines, containers, or even just defining valid groups in sshd_config. There is a whole multitude of ways of separating out SFTP users, it's really not a hard job to do.

Equally, a lot of Linux FTP daemons default to using /etc/passwd users too.

So what you're describing is a configuration detail the administrator needs to be aware of regardless of which transfer server they set up. In that regard FTP is no different from SFTP (or vice versa).

> My experience is that when people say FTP a lot of the times they can mean SFTP.

It's worse than that: FTP has come to mean file transfer generically. Lots of users refer to Dropbox as "FTP".

Meanwhile, HPE refer to their own web based file upload software (for uploading memory dumps, logs etc) as "Dropbox".

It's THE "protocol for file transfer", yes?


You must not work in the enterprise space. So much FTP, it's sickening...

Windows shares in my experience.

I'm not sure whether I would pick FTPS or SFTP as the superior protocol for automated file transfer (HTTPS is rather poor, although HTTPS-to-S3 is getting close).

On the one hand, SFTP has built-in support for key-based security (client authenticates with its own key, and checks the server's key), but the protocol itself is a mess that pretty much assumes the server is a UNIX box with a typical file system. We had the interesting challenge of writing a windows-based SFTP server that stores data on Azure Storage, and even the most surprising things (like one command to create an empty file, followed by another command to upload to that file) turn into a silly mess, not to mention the difficulty of setting up an SSH server on Windows.

FTPS gets those things right: security is based on SSL (more implementations available) and most file-altering operations are done with single commands. On the other hand, it's still password-based (although encrypted) and the data-channel-versus-control-channel problem becomes a mess with load balancing.

Why is HTTPS rather poor?

No support for directory listing, no support for "last modified date", no support for key-based authentication.

Of course, you can use WebDAV (but that's not plain HTTPS) or do kinda-FTP-over-HTTPS by listing directories in a specific format (but that's not a standard anymore).

Upload is too simplistic by default (input type=file); other solutions (HTML5) are somewhat painful to implement. Also, recursive download requires some thought.

There's also webdav, or simply, HTTP PUT, which works through any firewall and simply requires 'curl' on the client side. Recursive is still a problem (but the unstandardised list format of FTP isn't that much better)
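That bare PUT really is tiny on the client side. A sketch of the same thing with Python's stdlib http.client, equivalent to `curl -T file URL` (host, port, and path are placeholders, and the server must be configured to accept PUT on that path):

```python
import http.client

def http_put(host, port, path, data: bytes):
    """Upload bytes with a plain HTTP PUT, i.e. what
    `curl -T file http://host/path` does on the wire."""
    conn = http.client.HTTPConnection(host, port)
    conn.request("PUT", path, body=data,
                 headers={"Content-Type": "application/octet-stream"})
    resp = conn.getresponse()
    resp.read()  # drain the response body
    conn.close()
    return resp.status  # typically 200 or 201 on success
```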

I agree, so let's ask:

Hey, should I use SFTP or FTP?

Nah, FTP is fine

Okay, thanks

In my experience, this strongly correlates with "...and your password is l3tme1n." "Wait, isn't that insecure?" "Nah, it's fine, who would want to hack us anyway?"

In other words, anyone offering FTP as an integration path in 2017 is most likely wide open for a host of other security holes.

Some people only have FTP as a client capability. We support FTP, FTPS and SFTP, and we constantly push for SFTP with key-based authentication, but every so often FTP is really the only solution that the client is willing to pay for.

FTP, but no HTTP(S)? I have a hard time believing that.

For most of these clients, "FTP support" takes the form of a configuration page in their data export wizard, with fields like "host", "username", "password" and "path". If there isn't an "HTTP" checkbox, then they don't have HTTP support.

Maybe we need a generic HTTPS fileserver. It's just too easy to apt-get install an FTP server.

If you download Caddy with the "http.webdav" plugin, you get a single binary that can do the regular FTP tasks, plus autoconfigures SSL with Let's Encrypt :)

I really need to give caddy a try!

I find anonymous ftp to still be useful for distributing files to users. But for personal files? No.

Still vulnerable to MITM attacks. Someone could replace any of the files with their own malicious version. Hope none of those files are executables.

With the alternatives of HTTP, HTTPS, rsync, Bittorrent and whatnot, what is the advantage of FTP? Even HTTP clients are more prevalent nowadays.

rsync => linux only

bittorrent => not available out of the box and prohibited in all companies

https://www.itefix.net/cwrsync is just one rsync (client) for Windows, I'm sure there are more, including the Cygwin one

Both Facebook and Twitter use BitTorrent to deploy binaries onto production machines.

So if rsync is Linux-only, then how does everyone else do any work? On Linux it is the preferred option: there is delete mode, the ability to ignore folders, and differential compression. It just works. Plus you don't have to move a mouse around.

The other common way for me to upload or download files is with version control. Imagine if we integrated git into the browser and server to have something vastly more sensible.
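The rsync features mentioned above map onto stable flags you can script against. A sketch that assembles an invocation with delete mode (--delete), ignored folders (--exclude), and compression (-z); the paths in the usage line are placeholders:

```python
def rsync_cmd(src, dest, excludes=(), delete=True, compress=True):
    """Build an rsync command line exercising the features above:
    -a preserves permissions/times, -z compresses, --delete removes
    files at the destination that no longer exist at the source."""
    cmd = ["rsync", "-a"]
    if compress:
        cmd.append("-z")
    if delete:
        cmd.append("--delete")
    for pattern in excludes:
        cmd += ["--exclude", pattern]
    cmd += [src, dest]
    return cmd

# e.g. subprocess.run(rsync_cmd("site/", "user@host:/srv/site/",
#                               excludes=["node_modules"]), check=True)
```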

Weird. I seem to have deployed scripts with rsync on Linux, Mac OS, Windows native, Windows Cygwin, Android and various unixlikes. Not available out of the box, granted. "Prohibited at all companies" is an absurdly wide claim, my actual showstopper when checking viability was UDP behind NAT.

rsync is also available on macOS. I guess you mean "not Windows" (which is also incorrect, though it's not as ubiquitous as on Linux).

FTP is still valid for larger files - mainly in the printing industry, where job uploads can easily be multiple hundreds of MBs. Some now use third-party document transfer sites, but most still rely on FTP (usually with password access).

You did not mention any advantage of FTP over any other file transfer protocol. "Multiple hundreds of MBs" is not much by today's standards, and FTP is not especially suited to transferring large amounts of data. Literally any other file transfer protocol is better, like SFTP or HTTPS.

That's exactly it - the advantage is that people don't want to change, and it's taken them years to get used to whatever FTP client they've been given or trained on.

Every competent FTP client I can think of can also handle SFTP and FTPS connections. With SFTP it's just a matter of telling the client that's what you're using so it can change to the different port; with FTPS you don't even have to do that.

It's true that people resist change, but the change they have to swallow here is so tiny and the upside so massive that it's darn close to malpractice to continue giving them the option to use plaintext FTP.

I once had trouble with a client who used his business data solution for uploading data to our server (we support FTP, FTPS and SFTP, and we recommend SFTP + keys). It was an old java-based application with no updates, no SFTP support, and all attempts at FTPS failed in the face of certificate issues (outdated list of CAs, no ability to force a certificate). The client had no understanding of the issue and the technician was an expert for this specific business application, with no ability (or willingness, maybe) to write any sort of script or workaround outside that application.

I think a lot of the problems we have with new protocols is that we simply can't add new ports. So many routers simply block anything that's not 80, 443, or 21.

The advantage is that the newspaper I help won't have to move from PageMaker to something else, and can keep using Fetch on Mac OS 9 to transfer files to the press. New transfer schemes might make things more complex: move the files over to OS X first, then to whatever file service; lots of steps (for him).

Insight: small businesses don't like change when things still "just work", especially when changes only seem to add complexity and cost.

Plain HTTPS is very inconvenient for uploading files.

The better alternative is WebDAV but that is one of the last successful Microsoft 'Embrace, extend, extinguish' efforts.

You can easily transfer a bunch of files in a batch with FTP. With HTTP it's not as easy.

SFTP works fine as well, but if you don't need security it's not any better, just equivalent.

> With HTTP it's not as easy.

Activate directory listing and `wget --recursive`.

Did you not read the parent comment? Not everyone works for a tech-savvy Silicon Valley company where people just breezily swap out their tech every few months.

Every few months? It seems weird, but HTTP is itself decades old, and FTP is even older: it predates HTTP by roughly twenty years.

That hardly does the same thing. You get the directory-listing files stored, which you don't want. You don't get file permissions transferred. There are issues with dotfiles.

Worse, you have to secure your server to make sure .php files in someone's home directory don't accidentally get run.

By default a web server will run files unless you specially configure it not to; FTP never does that. It's a LOT harder to secure with HTTP. And if you have places where you want stuff run and other places where you want files transferred? It's possible, but the likelihood of a configuration mistake is very, very high.

HTTP is really not suited for bulk file transfer. SFTP works though.

> By default a web server will run files - unless you specially configure it not to

You can't really run anything using default nginx configuration, and even after some tweaking, you still need some FastCGI/WSGI/whatever-compliant daemon to pair it with to be able to "run something". Same goes for `python -m http.server`, except you basically cannot make it execute anything at all.
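To illustrate that point: a stdlib static server (the programmatic form of `python -m http.server`) sends file bytes verbatim and has no execution path at all. A sketch:

```python
import threading
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def serve_directory(directory, port=0):
    """Serve `directory` read-only over HTTP, like `python -m http.server`.
    Files (including .php, .cgi, ...) are streamed verbatim; nothing is
    ever executed server-side. port=0 picks a free port."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_address[1] is the chosen port
```

Requesting a .php file from this server returns its source bytes, not its output, which is exactly the safety property being described.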

I have definitely worked in this world - I feel like the right replacement for FTP for sending large assets around would be to host them in an S3 bucket, sooner than use web-based services where you add files to your account and share them out with somebody. I know Dropbox and Google Drive are commonly used for this, but isn't it more of a hassle than S3?

I don't see why FTP is any more valid than HTTP for this. Uploading from the browser directly to S3 isn't difficult. You can support resume and whatever auth strategy you want as your backend generates the signing key.

We do this (well, with Azure Blob Storage instead of S3, but it's the same). It works because we can have our customer execute[#] some code that authenticates them, queries our server for the signature, then slices up the file and uploads it, then notifies our server when it's done.

[#] including JavaScript in the page.

Obviously, turning that into an automated process that runs from a Windows Scheduled Task (or worse) is not as easy.

~10 years ago, a huge client came to our huge company asking for some way to manage synchronization over FTP. I was stumped. Very sensitive data. Not even rsync...

Is it faster than HTTP?

Nope (both are capable of saturating a line). Also, does it protect the data in transit? Nope (HTTPS does, rarely do I see plain HTTP any more). Does it have resume capabilities? Nope. Does it play well with NAT or proxies? Nope. Does it mangle data in transfer? Yes.

The only thing that keeps FTP around is its ubiquity.

Pedantic correction:

> Does it have resume capabilities? Nope.

See RFC 959:

> The argument field represents the server marker at which file transfer is to be restarted. This command does not cause file transfer but skips over the file to the specified data checkpoint. This command shall be immediately followed by the appropriate FTP service command which shall cause file transfer to resume.

And it would seem the definition from the RFC isn't even what's commonly implemented anymore, per https://cr.yp.to/ftp/retr.html (ctrl/cmd+f "The REST verb" no quotes)

Finally, if we're including HTTPS and comparing HTTP v FTP, why not include FTPS? https://en.wikipedia.org/wiki/FTPS (edit: probably because Chrome doesn't support it. Heh.)
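Python's stdlib ftplib exposes exactly this mechanism: the `rest=` argument of `retrbinary` is sent as a REST command before the RETR. A hedged sketch of resuming a partial download (the server, file names, and the server's REST support are all assumptions):

```python
import os
from ftplib import FTP

def resume_offset(local_path):
    """Bytes we already have locally; this becomes the REST argument."""
    return os.path.getsize(local_path) if os.path.exists(local_path) else 0

def resume_download(ftp: FTP, remote_name, local_path):
    """Resume a partial download on an already-connected FTP session.
    Sketch: assumes the server honors REST before RETR."""
    offset = resume_offset(local_path)
    with open(local_path, "ab") as f:  # append to the partial file
        ftp.retrbinary(f"RETR {remote_name}", f.write, rest=offset)
```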

I remember the collective joy when ncftp [1] turned up as an alternative to the default ftp client, suddenly making it possible to resume FTP transfers. An important feature when you'd started a large (several MB) download over a dialup modem but someone in the house needed to make a phone call.

IIRC FTP clients throughout the '90s were actually more reliable than browsers at resuming downloads.

1. https://en.wikipedia.org/wiki/NcFTP

IIRC in the late 90's to early 2000's "download managers" that could reliably resume were fairly popular.

I've never had a good experience with browser resume: I generally copy the link and then use aria2/curl/wget to actually get the file

It's much harder to transfer entire directories with HTTP.

You would need SFTP at least. But in the setups I've seen, SFTP wants you to be a user on the remote server, while with FTP it's easier to have FTP-only users (logins that are valid only for FTP, not actual system accounts).

It might be like shooting sparrows with a cannon, but FreeIPA can define policies where you say which service on which host or hostgroup is allowed to authenticate which user (or group).

So you can have a user who is de facto "on the remote server", but the only thing he can do is log into the specified SFTP service.

At smaller scales, something similar is surely possible without FreeIPA, using only PAM.

How are separate FTP accounts better than UNIX accounts with the shell set to scponly?

Faster to develop, maybe. Every time I try to do bulk uploads of files over HTTP I deal with APIs that are aggressively developer-hostile at both ends - bad error messages, painful workflow, massive amounts of boilerplate, and missing features that you take for granted in an FTP client.

Just because the technology is physically capable of doing it doesn't mean it's at all usable.

And rate-limiting !

I think the only advantage is a slightly lower overhead.

Articles mention the added complexity of FTP: separate control and data sockets, randomized ports, extra connection latency (N round trips vs. 1 for HTTP, IIRC)...

That said, HTTP has an aura of atomic resource download, while FTP comes with :drumroll: a file-system-access aura. I almost never upload anything from a shell through HTTP. And before the era of HTML drag-and-drop, uploading through a browser over HTTP wasn't on my mind either. That has changed a bit, though. Still, my favorite thing to upload over the web is webtorrent.

[1] https://whydoesitsuck.com/why-does-ftp-suck/

I believe it's higher. It's especially visible if transferring multiple smaller files, as each download requires a separate new connection (no data connection reuse).

I'm not sure why WebDAV is less popular. It has its oddities and implementation issues but it's nowhere close to FTP weirdness. And AFAIK WebDAV support is either built-in or readily available to install in all mainstream OSes.

This is unrelated but I would really love ipfs to be adopted for use cases like these.


Delta transfers? Multiple freshness checking methods? Multiple transport protocols? Built in checksumming? This is not your grandfather's rsync, mind you - the protocol is at version 30, IIRC. (As opposed to FTP which has made zero progress in the last two decades) Now, I'm not pretending it's a universal tool (there are far more efficient protocols for e.g. file distribution), but at one way synchronization tasks, it excels.

FTPS is quite slow on my gigabit machine when uploading large files, compared to plain FTP. I think it's due to the SSL handshake.

Next I want to see if FTP over OpenVPN is faster.

FTP nowadays is only useful for fetching resources from very old sites. Everyone I know has switched over to sftp and other ssh-based solutions.

Today, reading the comments, I have learned that a lot of hacker news readers do not understand FTP.

Does Chrome even support FTPS?

It doesn't, according to the third comment in the link.

because it's a language for beginners
