CloudFlare, We Have a Problem (cryto.net)
297 points by signa11 on July 16, 2016 | 163 comments



> you can't protect the rest of your infrastructure (mailservers, chat servers, gameservers, and so on)

That leads to my technique for discovering origin servers when pen testing CloudFlare customers: brute force all the DNS names and record types, map out all the netblocks, scan them for open ports, identify the web server ports, and attempt to find the vhosts on those ports by setting the Host header in requests.

You'll almost always find the origin web server (sans protection) and also dev/staging instances of apps.
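
If you want the gist of it as code, here's a rough sketch (Go, untested; the wordlist and CloudFlare ranges are just illustrative placeholders, a real run uses much bigger lists and also walks the surrounding netblocks):

  // Toy origin-discovery sketch: resolve candidate subdomains, skip anything
  // inside (a subset of) CloudFlare's published ranges, then probe the
  // remaining IPs directly while asking for the main site's vhost.
  package main

  import (
    "fmt"
    "net"
    "net/http"
    "time"
  )

  // Partial list for illustration only - see cloudflare.com/ips for the real one.
  var cfRanges = []string{"104.16.0.0/13", "172.64.0.0/13"}

  func behindCloudFlare(ip net.IP) bool {
    for _, cidr := range cfRanges {
      _, block, _ := net.ParseCIDR(cidr)
      if block.Contains(ip) {
        return true
      }
    }
    return false
  }

  func main() {
    domain := "example.com" // hypothetical target
    candidates := []string{"mail", "smtp", "ftp", "vpn", "dev", "staging", "direct", "origin"}

    client := &http.Client{Timeout: 5 * time.Second}
    for _, sub := range candidates {
      ips, err := net.LookupIP(sub + "." + domain)
      if err != nil {
        continue
      }
      for _, ip := range ips {
        if ip.To4() == nil || behindCloudFlare(ip) {
          continue // IPv6 skipped for brevity; proxied IPs are useless anyway
        }
        // Hit the bare IP, but request the main site's vhost via the Host header.
        req, _ := http.NewRequest("GET", "http://"+ip.String()+"/", nil)
        req.Host = domain
        resp, err := client.Do(req)
        if err != nil {
          continue
        }
        resp.Body.Close()
        fmt.Printf("%s.%s -> %s answered %d (possible origin)\n", sub, domain, ip, resp.StatusCode)
      }
    }
  }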


There is an easier way: send a DMCA request. Cloudflare has been bleeding the IPs of their customers in that manner for years.


Assuming you don't mind committing perjury, I guess.


Is there actually a recorded instance of someone getting charged for perjury relating to a DMCA request?


I don't know what happened with this case: https://torrentfreak.com/warner-bros-our-false-dmca-takedown...

The law is pretty clearly broken. I kind of hope Google is collecting all the false takedown requests they get.

Submitting a false report, under penalty of perjury, and saying "whoops algorithms lol" shouldn't be something they get away with.

There's this too: https://www.eff.org/press/archives/2004/10/15


Only one part of a DMCA takedown is under penalty of perjury, and it's quite possible to file a knowingly false takedown request without that part being falsified.


But isn't it the part where you assert in good faith you own the copyright? How do you get around that scenario of a malicious takedown request against a random site?


It's the part where you assert that you are the owner of the copyright you allege is being violated. As long as you own copyright on something, that's not a problem. The part where you actually allege that particular content uses your work without permission or privilege is not under penalty of perjury.


Yes, because hackers who go around DDoSing entities care about perjury


To be fair they were talking about pentesters, for whom port scanning is usually on the table, and breaking the law is not.


Under Computer Fraud and Abuse Act (18 U.S.C. 1030) it is a federal crime to "intentionally access a computer without authorization or exceed authorized access" ...

An eager prosecutor could take that and run a mile


Yep, except for the small sites that only ever set up A/MX records and are using hosted email.

Sometimes in those cases DNS history is enough.


Thanks - forgot to mention DNS history, and a DomainTools subscription is worth it


Or just http://ViewDNS.info/ for a free alternative :)


Heh, $99/month, not worth it for me as a lowly scrub.

Cool to see what you're doing though, wish I had a sub myself.


This only works if there are any DNS names that aren't proxied... we don't have any exposed so there would be nothing for you to find.


Of course, but that's his point.

Most of the time people don't go through the effort of putting their mailservers/nameservers/etc. behind a proxy.

I'm pretty comfortable guessing that 95% of cPanel/Plesk users that use CloudFlare (or another CDN) _and_ host their own mail/name-servers don't put the latter behind a proxy; and they are often on the same box as the webserver.

Edit: Which is to say that this doesn't affect someone doing it 'right', but almost everyone is sloppy (most people just don't care, as they're not actively being DDoS'd).

In reality, even using this to find the webserver, they will eventually get wise to how you're finding the IP (likely) and swap to a new one (depending on their hosting situation), this time putting all other DNS resources 'behind proxies'.


> Most of the time people don't go through the effort

If people don't put in the effort for the security they need, then they won't have that security. This applies to any concept, and I don't see how this has anything to do with a single vendor who just provides the tools and service.


Because the whole selling point of CloudFlare is that the customer supposedly doesn't need to invest effort into security, because CloudFlare will handle it all for them.

Which is obviously not the case, but that's what the marketing says.


They do handle a lot, doesn't mean you're not responsible for the settings you choose. Lack of understanding or effort on your part doesn't mean you get to just blame the vendor.


This is almost literally how they are marketing their product: https://www.cloudflare.com/overview/

Either you invest effort into security anyway and you don't need CloudFlare, or you don't invest effort into security and CloudFlare won't save you either. In neither case is CloudFlare the solution.


Or the logical way to think about this is that CloudFlare is another vendor that you can use (amongst many) to create the security you need with the trade-offs that are acceptable.

Marketing does not absolve you from proper configuration... clearly you have it out for this company for some reason.


Yes, I personally wasn't trying to convey anti-CloudFlare sentiment, I even said "and other CDN providers"

I don't think this is a 'CloudFlare vuln' or the responsibility of CloudFlare to resolve etc...

I maintain that it is sloppy work that leaks the underlying webserver IP, but also that few people care about doing so.


I'm a bit disappointed that for all the "oh no they break TLS, oh securitay" in the article, the author ends up recommending Caddy. It's written in Go using Go's TLS implementation, which is only a partial implementation of TLS 1.2, which the authors themselves have said has not been thoroughly reviewed, against which there are a few known attacks, and which shouldn't be used for things exposed to the world wide west.


Out of curiosity do you have examples of known attacks against the Go TLS stack?

I am going to go have a look myself but would appreciate the head start.

Edit: I found these but they don't seem super terrible and are fixed in the versions of Go most people are using: https://www.cvedetails.com/vulnerability-list/vendor_id-1418...


No, I do not. Unfortunately that's no guarantee they're not around though :(.

All the data on this is extremely old and no one seems to recently have done a deep-dive into Go's TLS stack. I really hope someone will (or that Google will fund the research themselves). It would be beneficial to the ecosystem to have a thoroughly reviewed implementation and a clear understanding of what the state is.

Right now all I can go on is a statement of the author about 3 years ago, around the time of Go 1.2:

Cryptography is notoriously easy to botch in subtle and surprising ways and I’m only human. I don’t feel that I can warrant that Go’s TLS code is flawless and I wouldn’t want to misrepresent it.

There are a couple of places where the code is known to have side-channel issues: the RSA code is blinded but not constant time, elliptic curves other than P-224 are not constant time and the Lucky13 attack might work. I hope to address the latter two in the Go 1.2 timeframe with a constant-time P-256 implementation and AES-GCM.

Nobody has stepped forward to do a review of the TLS stack however and I’ve not investigated whether we could get Matasano or the like to do it. That depends on whether Google wishes to fund it.

https://blog.golang.org/a-conversation-with-the-go-team

I've also had a discussion with one of the Caddy developers who recommended for production usage to front it with something that does TLS for you, precisely because no one really seems to know the state of TLS in Go. Arguably other TLS implementations have other issues but there's something to be said for "the devil you know".


I would generally expect Go's TLS 1.2 defect rate to be competitive with those of other mainstream TLS implementations. That code is very well regarded and designed by domain experts.

I'm one of the founders of Matasano, and started the crypto practice within Matasano that would have done that Go TLS review, and I can say pretty confidently that compared to the attention Go TLS already gets from experts, the long-term benefit of us reviewing it as a formal project would have been marginal.


Currently being in that crypto practice, and having found the latest CVE on golang that affected their TLS stack (was found in the bignum package), I'm confident of the inverse.


Considering:

* Golang's TLS stack is far less complex in comparison to other projects.

* Golang's TLS stack is written in a "safe" language.

* Golang's TLS stack is written by individuals with lots of experience in SSL/TLS (and its flaws!).

* Contributions to the project are held to very high standards.

Why do you believe the inverse is true?


> Golang's TLS stack is far less complex in comparison to other projects.

TLS is the definition of complex =)

> Golang's TLS stack is written in a "safe" language.

Not all bugs are memory corruption bugs.

> Golang's TLS stack is written by individuals with lots of experience in SSL/TLS (and its flaws!)

> Contributions to the project are held to very high standards

True, I would expect the code to be of high quality and the bugs to be sparse. But even knowing this, you always want to have another pair of eyes looking at your code. An audit done by other experts brings a lot to the table.

PS: also, I think an audit would be a negligible cost for Google =)


I'm not saying having you look at the code would be a bad idea. I'm just pushing back on the notion that the code isn't "ready for prime time" until you do.


Anyone who writes crypto and does not give such a disclaimer is delusional. That makes me more confident, since it means the author respects the problem.


I don't find security FUD like this convincing. OpenSSL, which has more eyeballs on it than anything else, has had a rash of vulnerabilities.

Go is also a safer language than C, which makes whole classes of bugs far less likely.


I don't understand why Cloudflare is used by so many sites. I would guess that for 95% of its users, it doesn't solve any real problem.

http://www.slashgeek.net/2016/06/07/cloudflare-making-intern...


They've got a very compelling free tier to get you roped in. It works great as a CDN, with integrated SSL, a great interface, DDoS protection / firewall, page rules - those are just a few of the useful features.

Is there a more comprehensive free tier anywhere else?

P.S. I'm not saying they are the best choice. They are simply too convenient & comprehensive to get started. With a single click your site can "claim" to be HTTPS even though the upstream connection "may not" be encrypted.


Right, but this is pretty much precisely the problem. It's just the Nth generation of "just centralize the internet through us and we'll take care of everything for you", but this time marketed at startups. All the usual problems with centralization still apply.

(It's still not really "DDoS protection", by the way. They just don't offer that on their free plan.)


The internet is already very centralized far beyond a single CDN company. All you need to do is look at the number of backbone providers and ISPs serving the vast majority of people in the world.

CloudFlare being a CDN has very little lock-in and is incredibly easy to move away from, so it speaks to how much value they deliver with their product that they have so many customers.


It only matters because they have no competitors, otherwise you'd say the same applies to AWS hosting the origin servers, and the databases of those very startups likely to use Cloudflare.

In the end, if there were true competitors, it probably wouldn't matter much; they would be just one popular service that handled your data like many others.


They also have plenty of competitors; the CDN space has more companies than the ISP space, so the internet is already more centralized at a much deeper level. This is a silly claim by the OP.


DNS!


Amazon Route 53 is better.


It isn't free, it's more complex to set up and it uses some of its own terminology like 'Hosted Zones' that need to be understood. Just 3 ways in which it's not better.

It's probably better if you define the comparison in terms of flexibility/control etc etc, but simply stating it's 'better' doesn't really add anything to the conversation.


We host a big domain there and it's like five bucks a month. Of course we don't use the more expensive features just basic DNS and the API.


DDoS protection on a budget.

A typical hosting provider would shut down the port if they see any heavy ingress traffic. Or maybe won't shut it down if their network can tolerate it, but won't bother to filter either, so the server would be overwhelmed and down anyway. There are providers that help, but they cost extra.

CloudFlare, on the other hand, is $20/mo (or even free). Not much protection at this level, but it well-scares kiddies away with "uh, it's behind the CloudFlare", and if someone still hates you (and you haven't screwed up concealing where the real server is), you can upgrade the plan and still try to remain online.

Don't see any other value, except, maybe, for caching/CDN stuff. TLS is done wrong (as described in the OP article), DNS management is a toy (the old fart in me always prefers the zonefiles), minification should be done (and we do it) at the packaging phase - not at runtime - and the WAF doesn't give any details about what it does (half of the rules are, basically, "enable the we-dont-tell-you-what-it-does rule and hope it won't break your site in a subtle manner").


The article mentions several alternatives that are less than $20/month for actual DDoS mitigation.


For me, it's so I don't have to deal with SSL. Sure, SSL isn't hard... but I have thousands of things more important to do than worry about where my keys are and when it expires. There's some nice bonus features like redirects, error pages, etc.

It takes 10 seconds to sign up, is cheap, makes something complicated very simple, and offers immediate value. Anyone building a developer tool should use CloudFlare as a case study.

EDIT: To the person who replied to me: Heroku, Shopify, and other similar services already serve their sites as HTTPS; Cloudflare just lets you throw HTTPS on a custom domain. It's HTTPS from top to bottom, don't worry.


If I understand you correctly your setup is exactly what's described in the article:

  your_server <--- no TLS ---> CloudFlare <--- TLS ---> Browser

If that's the case you are putting your users at risk, by pretending there is a TLS connection to your server when really it's only to CloudFlare. The connection from CloudFlare to your server is completely unprotected and is routed over the public internet.

That's not an acceptable setup. If I am communicating to a service that is TLS protected and I see the little green icon in the top left of the bar, I should be confident that my data that I am communicating to you is not routed over the public internet unprotected!

You say this:

> makes something complicated very simple

You made it simple by simply not doing it


Well, you've reduced the risk of someone MITMing your users connections. So now your risk is of someone MITMing CloudFlare's connection.

My impression is that this will rule out most coffee shop owners from tampering with the connection through their router. Even if state actors can still get at your content, this is still better than "no SSL whatsoever".

Honestly, though, for anything beyond "static HTML on some server somewhere", basic SSL conf. is a 30 minute one-time job...


> this is still better than "no SSL whatsoever"

This breaks the expectation that if a website is using HTTPS the connection is encrypted from source to destination. I'm not sure it's better as it's effectively giving the user a false sense of security.


I'm not sure this is the reputation of HTTPS: people have no idea what HTTPS means besides "the website is secure". It's your job, as a server admin, to choose how you deal with your infrastructure. If you choose not to use TLS between you and Cloudflare, then you made a decision (one that is still fundamentally better than no TLS at all). If something happens, whether because of Cloudflare or because of a MITM between CF and you, then it is not on the user but on you.

FWIW, a lot of infrastructure terminates TLS at the load balancer as well. HTTPS does not mean e2e encryption. HTTPS means you're securely talking to their infrastructure.


You may think it's fundamentally better than no TLS, and it may be on some levels, but where it's displayed to the user, it's seen as "This is HTTPS", with no mention of "it switches to HTTP for the last half of the trip". I don't want my credit card details and login info routed over the public internet in plaintext, but thanks to CF, I can't tell if they are or aren't. Oh sure, I won't get MITM'd by a coffee shop, but that "gain" is less than the loss of "oh, it's got the lock, that means it's secure"


But an infrastructure can make bad decisions at any point. They could terminate the TLS connection at the wrong node, they could store your data unencrypted, they could... All of this is not on the user. It's on the company. And if they decide to use Cloudflare this way, it is their architecture decision.


Yes, that is all understood. The fact remains, however, that they are basically subverting what that lock means. It's ALL ON THE COMPANY, but I can't tell as a user that they have broken it, and in fact, my browser is SAYING it's secure. The company is deciding to make it lie. THAT IS A PROBLEM.


  your_server <--- no TLS ---> CloudFlare <--- TLS ---> Browser
So there's a MITM here -- Cloudflare.


I don't do this, but I believe you can use Cloudflare with self-signed certificates on your server.
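
If I understand their "Full SSL" (non-strict) mode correctly, it encrypts to the origin but doesn't validate the certificate, so something like this on the origin side should be enough - untested sketch, the hostname is made up:

  // Minimal origin sketch: generate a throwaway self-signed cert at startup
  // and serve HTTPS with it, so the CDN -> origin hop is at least encrypted.
  package main

  import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/tls"
    "crypto/x509"
    "crypto/x509/pkix"
    "math/big"
    "net/http"
    "time"
  )

  func main() {
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
      panic(err)
    }
    tmpl := x509.Certificate{
      SerialNumber: big.NewInt(1),
      Subject:      pkix.Name{CommonName: "origin.example.com"}, // hypothetical origin name
      DNSNames:     []string{"origin.example.com"},
      NotBefore:    time.Now(),
      NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    }
    der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    if err != nil {
      panic(err)
    }

    srv := &http.Server{
      Addr: ":443",
      TLSConfig: &tls.Config{
        Certificates: []tls.Certificate{{Certificate: [][]byte{der}, PrivateKey: key}},
      },
      Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello from the origin\n"))
      }),
    }
    // Empty paths mean "use the certificates from TLSConfig".
    panic(srv.ListenAndServeTLS("", ""))
  }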

Cloudflare essentially gives you free SAN certs at the edge nodes, which is something you typically have to pay for (to the tune of $$$$s) with other CDNs. Most CDNs give you the more typical SNI certs.

Also, as an aside, isn't it more likely for an end-user to get MITM'd than for that to happen to data centers?


There's no such thing as a SNI certificate. SNI is a TLS extension allowing clients to send the hostname as part of the Client Hello, which allows the server to pick the right certificate for the hostname being requested. Without SNI, you're essentially limited to one certificate per IP address (which is what makes things expensive for CDNs if you need to support clients without SNI).

CloudFlare's free plan uses SNI.
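
If you want to see the difference for yourself, here's a small sketch (Go, hostname made up) that dials a site with an explicit SNI value and dumps the SANs of whatever certificate comes back:

  // SNI is what the client sends; SANs are the names baked into the returned cert.
  package main

  import (
    "crypto/tls"
    "fmt"
  )

  func main() {
    host := "example.com" // hypothetical CloudFlare-fronted site
    conn, err := tls.Dial("tcp", host+":443", &tls.Config{
      ServerName: host, // the SNI value in the Client Hello
    })
    if err != nil {
      panic(err)
    }
    defer conn.Close()

    leaf := conn.ConnectionState().PeerCertificates[0]
    fmt.Println("Subject CN:", leaf.Subject.CommonName)
    fmt.Println("SANs:")
    for _, name := range leaf.DNSNames {
      // On the free plan you'd typically see many unrelated domains
      // sharing one SAN certificate here.
      fmt.Println("  ", name)
    }
  }

Run it against a free-plan site and the SAN list makes the "shared certificate" part obvious.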


Their $20/month plan appears to offer SAN, so you can support clients without SNI.


Their free plan uses SAN certificates too. The difference is that they do "one (shared) SAN certificate per IP" for paid plans, as opposed to "multiple (shared) SAN certificates per IP" on the free plan, which requires SNI to work.


To your aside, MITM seems more likely to happen to the end-user, but it is possible to happen at the Cloudflare-->Website side. We already know that the NSA logs all traffic going over certain backbone routers[1], and that some ISPs are modifying non-HTTPS connections[2]. So if your traffic bounces through certain "bad" networks between Cloudflare and your site, who knows what happens.

Relatedly, Tor recommends all users use HTTPS[3], as otherwise the connection from the exit node to the target site is vulnerable to MITM. Given that there are malicious exit nodes[4], seems like a good idea. But, as the article brings to light, that still doesn't make any guarantees of safety.

[1] https://en.wikipedia.org/wiki/Room_641A [2] e.g. http://www.infoworld.com/article/2925839/net-neutrality/code... [3] https://www.torproject.org/docs/faq.html.en#CanExitNodesEave... [4] e.g. http://www.cs.kau.se/philwint/spoiled_onions/ but plenty of other sources


When it comes to security, the question of "who is my attacker?" really determines how much investment you make in securing your service.

Many online providers don't care if the NSA is slurping up their data. For example, do you think that Macy's online is going to stick out their necks to protect clothing transactions from the government?

Nope.

Personally, I've seen end-users more worried about MITM close to their endpoint, such as their employers or schools scanning their traffic. They're not trying to evade governments here.


> If that's the case you are putting your users at risk, by pretending there is a TLS connection to your server when really it's only to CloudFlare. The connection from CloudFlare to your server is completely unprotected and is routed over the public internet.

How much of a risk is this in practice, assuming that when I do illegal things I'm very careful to only use sites that I know are end-to-end encrypted, and so for sites going through CloudFlare I'm only worried about criminals snooping for identity theft or credit card theft or similar purposes? [1]

I'd guess that criminals are overwhelmingly more likely to manage to intercept things near the browser side of things than near the server side of things. Probably most likely would be by taking over a public facing wifi router. I'd expect that getting in between CloudFlare and the server would take resources beyond most non-government actors.

[1] Note: I am NOT making an "if you aren't doing anything illegal, you don't need to worry about government snooping" argument. However, there is plenty of information that I consider confidential in the sense that I do not want it generally known, but where the government already knows that information. So, if some site asked me for such information and I needed to decide if they are providing adequate security I would only need to consider non-government snoopers because government snoopers already have the information.


Well, maybe the whole encryption/SSL system is broken if it's possible to set up the system like that?


What about using self-signed (and thereby untrusted) certificates?


Why do this when letsencrypt.org provides free trusted SSL certs?


There are edge cases this misses.

Say I want to serve a static HTML website from Google Cloud Storage over SSL. There's no way to do this for free with Let's Encrypt. One click and free with CloudFlare.

Is there a MITM chance between CloudFlare and Google? Yes. But for a static website with no login or input from users, it seems safer than any free alternative I know of.


Doesn't Google offer a generic domain for you to use, like Amazon has s3.amazonaws.com, with some information about your username in the URL or subdomain?


Even if they do, how do you expect to get a valid (i.e. trusted) certificate for an FQDN that ends with ".google.com"?


No, you host from your own domain.

https://cloud.google.com/storage/docs/hosting-static-website

It gives you a site at yourname.com.

The only missing link is HTTPS, which you get with CloudFlare.


I'd expect they'd use something like star-certs, and give you a heyuser.storagegcd.com


Forgive my ignorance, but what is a "star-cert"?


Oh I think I just used a bit of jargon there. It's canonically called a Wildcard Certificate [0]

[0] https://en.wikipedia.org/wiki/Wildcard_certificate


If CloudFlare pins the certificates then fine, but if it doesn't, you might as well leave the traffic unencrypted.


> That's not an acceptable setup. If I am communicating to a service that is TLS protected and I see the little green icon in the top left of the bar, I should be confident that my data that I am communicating to you is not routed over the public internet unprotected!

If you see that little green icon, you can be confident that the server you're communicating with has done the bare minimum to satisfy the browser gods today, so that the https everywhere people will stop yelling. It's never meant anything else. It's best to assume you're talking to a TLS terminator which communicates to the HTTP endpoint via AMPRnet (unencrypted IP via amateur radio).


The browser indicator is working properly according to the standard. That's what standards are - you don't get to complain about intent just because it doesn't mean to YOU what YOU WANT it to mean.

Traditionally certificates have been very expensive for little unpaid blogs to have, and difficult to set up. LetsEncrypt is a very new solution, but it will be a long time until hosts support it; remember most of us are on simple cPanel setups, so until it's baked INTO cPanel and hosts have upgraded, it's unusable for a majority of the web. Just because we can run a web site doesn't mean we can hack layers into web internals.

This is in part the fault of LE's idiotically short and crippling certificate lifespan. Lord knows why they did it but they knew it would screw us from the beginning. "Hey you should automate this, but seeing as you can't, no cert for you."

Google prefers https content. Cloudflare gives small people a better slice of those readers.

Ultimately nobody gives a shit that you're reading the semi-encrypted content I publish. NOBODY. Oh no you read how a database works? If that's illegal where you live then I think you're in much deeper survival problems than whether your browser button is magically misleading you (which it isn't; it's a guide, one that reveals itself once you click through, which is again exactly how it was designed).


How about because they serve hundreds of terabytes of our images for $20/month? It has been well worth the money.


I am talking about free users.


They'll serve quite a lot of traffic for those as well. I've seen quite a few people recommend a combination of Amazon S3 and Cloudfront for hosting static sites.


Cloudfront is AWS's CDN, and isn't related to CloudFlare.


Typo. meant Cloudflare.


The same applies to the free tier: I need 5x less server power thanks to CloudFlare, and I get it for free! Including other useful things.


That can be said of lots of stuff. Developers can be marketed to just like anyone else, and cargo culting is very common.

"You don't use CloudFlare? Are you enterprise?"

Amazon is also adept at this. Many of their database and storage services are great but EC2 is overpriced and they overcharge for bandwidth. I see loads of people who only use them for compute and who would be much better served by Digital Ocean or Vultr. But you're not a serious deployment if you don't over-provision on Amazon... and then put the whole thing behind CloudFlare.

Go to a dev or IT conference and you can see this marketing in full force. The style of it reminds me a lot of performance automotive marketing that you would see around a Nascar event.


It just gets in the way for users these days.

The periodic pre-load "checking your browser" delay, and the absolutely infuriating, and almost constant, captcha make it a positively user-hostile service.

I won't use the net from public wifi without VPN active, which now means cloudflare asks for that captcha for every site in its control, and re-asks every 30 minutes. A year ago it was rare for the captcha to pop up, now it's near permanent.

Most of the time now I'll just go elsewhere.


Free SSL.

Enough reasons for me.



These posts always repeat the same thing: don't use Flexible SSL. Fine, that's your choice as the site operator to use or not use that option. Why blame the provider for your configuration choice?


No, they don't. At least one of those posts is specifically about Full SSL, as is my article. You cannot proxy through CloudFlare without an MITM risk, by design. No matter how you configure it.


Just like every other CDN or network service out there that is a proxy. Using a load balancer from a cloud provider that does TLS termination is the same risk. Of course it's by design, because that's just how it works. You can't proxy TLS traffic without being a MITM.

Please stop repeating the same exact thing instead of actually understanding what you're talking about; it's just misinformation and FUD at this point. You still haven't replied to any of the other questions.


Letsencrypt?


TBF that's only just getting started. Sites have been up for a while.


While true, StartCom and WoSign have been around for a while (even if they have some shadiness issues of their own).

I wouldn't necessarily hold it against somebody to have picked CloudFlare because of this in the past, but in 2016 it's simply no longer a reason to keep using CF.


They cloak my origin.


Tell me your domain, I'll tell you your origin in 5 seconds.


Explain?

The only ways I can think of finding the origin involve user error:

    1. DNS history lookup where they're still using the same IP address.
    2. Scanning the internet where they aren't ignoring non-Cloudflare 
       requests.
You can file an abuse or DMCA request to Cloudflare, but they only reveal your provider.


CloudFlare passes your origin.


How so?


If you are on Free/Pro (DDoS protection starts on Business+, IIRC) they will just pass your A record through unproxied once you're attacked.


All plans have protection, free has lower limits but still substantial. Pro has higher limits and Business is unlimited. Given the pricing, that's very fair.

They won't automatically unproxy you without notice. If you're getting attacked, you need to pay for the level of protection you need.


> They won't automatically unproxy you without notice.

Author here. They've done precisely that to me in the past. I don't have first-hand experience with their current policies (if they have changed at all), but still hear similar stories with some regularity.


The only thing I've seen these days is Cloudflare lifting your service into "Under Attack" mode, accompanied by an email saying that they will suspend your account if you toggle it off during the attack.


Since this is a core part of their business, I don't see how they would risk it with this tactic. If you have evidence, let's see the details.



Looks like it wasn't until 2013 that Cloudflare became serious about origin protection and a more universal anti-DDoS service: https://blog.cloudflare.com/ddos-prevention-protecting-the-o... https://blog.cloudflare.com/the-ddos-that-almost-broke-the-i...

I'd like to know if people are still getting that email.



Wrong direction. That reveals the client IP to the server, not the server IP to the client.


While I agree with many of the points made, his estimation of the number of round trips is way off. For a start, TLS negotiation requires several round trips before you even start speaking HTTP. Secondly, browsers (depending on vendor, version and number of domains) have a limit to the number of in flight requests. Thirdly, many pages load some assets via javascript execution, which adds another set of round trips.

Cloudflare is very widely peered (I think they are now the most widely peered company), and as such is almost certainly closer to the end user than the origin server. This really does matter when making lots of round trips, which is in practice closer to inevitable (unless you have a small SPDY or HTTP/2 site, which is approximately no one).


> While I agree with many of the points made, his estimation of the number of round trips is way off.

It was admittedly a simplified equation, more for illustrative purposes than for argumentative purposes.

> For a start, TLS negotiation requires several round trips before you even start speaking HTTP.

While true, I'm using just-HTTP as a baseline. There are various techniques for reducing TLS roundtrip time, and it's so heavily dependent on the environment that it doesn't make for a practical baseline.

> Secondly, browsers (depending on vendor, version and number of domains) have a limit to the number of in flight requests.

Correct. But this is typically solved by bundling assets (on the server side) or increasing that amount (on the client side). On a well-designed site, this should not pose issues.

> As Cloudflare are very widely peered (I think they are now the most widely peered company), and as such are almost certainly closer to the end user than the origin server.

The point is that the same applies to Anycast CDNs (which CloudFlare is not, really, it's a proxy), but without the privacy issues. CF isn't really a good solution to this.


I don't think anybody is arguing that Cloudflare provides unique features that can't be obtained otherwise. The point is that CF is VERY convenient to use compared to having to do everything you mentioned through different services and then even some more. With Cloudflare you essentially get distributed DNS, a very fast and universally peered CDN, SSL support, IPv6 support, DNSSEC support, HTTP/2 support, website optimizations like responsive images or JS packing, all essentially with one click and for free, and without having to change a line in your code. And if you hit a DDoS, you swipe your card and you are done. This is their USP.

Stating that "you can write code and/or do/configure/buy things so that in the end you can avoid using it" is true, but it's a hard sell for an average business. The only way to avoid the Cloudflare monoculture is for true competitors to arise. As much as you can hate it, a reverse proxy seems like what people want for this, and Cloudflare has even developed a workaround for the trust issues (keyless SSL) that competitors could offer to non-enterprise customers. I think there's space in that market.


This is essentially arguing that "letting a centralized gatekeeper do all this is easier". While technically true, it also completely misses the point of the web and how it was designed - namely, to be decentralized and not require this.

The thing is that "ease of use" isn't the only metric that matters, even if it's the easiest metric to sell. More often than not - especially in more recent 'startup culture' - something being 'easier' just means that it's not doing it correctly, and that somebody somewhere is conveniently ignoring the tradeoffs.


No, what I am arguing is that the CF design of "glorified reverse proxy" is basically a very good product with strong market demand that faces close to zero competition.

Compare this to AWS. AWS also "runs" shitloads of the web today; but still, the "centralization" problem is less mentioned in the context of AWS because there is fierce competition (Google, and also OpenStack and all the OpenStack clouds from major vendors).

Reverse proxies are "centralized gatekeepers" but no more than a hosting provider is, and we accept those as normal (right?).

What if there were 5-6 big players in the "reverse proxy" market, plus a hundred of smaller offerings? Wouldn't that basically solve all the issues of those worried about the "open web"?


> There are various techniques for reducing TLS roundtrip time, and it's so heavily dependent on the environment that it doesn't make for a practical baseline.

One of the best techniques is using a CDN to get the TLS termination closer to the user. This is done by every major company. There is no practical alternative if you serve a global audience.

> But this is typically solved by bundling assets (on the server side) or increasing that amount (on the client side). On a well-designed site, this should not pose issues.

It's not just the number of requests but the latency involved. Having a request done in 20ms from a local cache vs 500ms from a far away server makes a massive difference in web performance. You are just making up these scenarios where there are countless discussions, tests, studies, reports and conferences that talk about best practices for performance which is exactly the opposite of everything you state.

> The point is that the same applies to Anycast CDNs (which CloudFlare is not, really, it's a proxy), but without the privacy issues. CF isn't really a good solution to this.

Every major CDN is an anycast reverse proxy today; name one that isn't. I don't think you understand what a "proxy" is.


> One of the best techniques is using a CDN to get the TLS termination closer to the user. This is done by every major company. There is no practical alternative if you serve a global audience.

"Best"? Absolutely not. Unless you control the TLS terminators (which is the case for major companies), this completely breaks the security that TLS is supposed to provide. Fastest? Perhaps.

> It's not just the number of requests but the latency involved. Having a request done in 20ms from a local cache vs 500ms from a far away server makes a massive difference in web performance. You are just making up these scenarios where there are countless discussions, tests, studies, reports and conferences that talk about best practices for performance which is exactly the opposite of everything you state.

Those same tests show that the problem is generally not in the network latency, and that the optimizations needed to get performance to an acceptable level are usually of an entirely different nature - compressing images, bundling assets, loading order, not loading megabytes of JS at once, using CSS for styling rather than JS, and so on.

Note that I am talking about acceptable here, not optimal (from a performance perspective). Yes, you may be able to optimize performance beyond what you need, but at what cost? And is this really worth it if you've already hit the 'acceptable' point anyway? What's the point?

> Every major CDN is an anycast reverse proxy today, name one that isnt. I dont think you understand what a "proxy" is.

I'm not talking about something being technically classifiable as a "reverse proxy". I'm talking about things that function as a reverse proxy for a site.

Many CDNs offer plans where the only thing they are caching are the static assets, and they need to be explicitly referenced through a CDN URL. Some CDNs - in particular those handling video - even just let the customer upload the content to their infrastructure, and then proxy to their own backend storage, sometimes with transcoding services included.

CloudFlare works differently, and proxies the actual requests on the main domain of the site itself, caching some of the requests but not others. This has fundamentally different implications, because now suddenly dynamic data is proxied without any caching whatsoever, personal data is going through them, and so on.

Yes, both technically use reverse proxying technology in their stack, but in the end they are completely different approaches.


> Unless you control the TLS terminators

Nope, this is all about trusting vendors. Unless you're building your own computer chips and laying your own fiber, you're just blaming a vendor for what seems like personal issues.

> problem is generally not in the network latency

Again years of research has shown this is one of the biggest issues on the web. Read https://www.igvita.com/ for some good info.

> Yes, both technically use reverse proxying technology in their stack, but in the end they are completely different approaches.

Nope, this clearly shows you don't have any understanding. CF is a CDN. That's it. Instead of like other CDNs that offer a CNAME that you point your DNS to, they just offer a free DNS service that automates the CNAME entry into a single button.

All CDNs today are reverse proxies that will either return a cached response or proxy the request to the origin if the cache is empty. You can also use every CDN to proxy dynamic requests and many do. And yes, there are a few that offer a push model where you upload assets explicitly but this is the exception, not the standard.

You're talking about CF as if they're something unique when they do exactly the same thing as everyone else. Use them on a subdomain if you want and leave the main domain unproxied, use them only for static files or for dynamic requests, use them to proxy everything or cache everything including HTML. It's all just configuration. Why is this such a problem for you?


> This may not sound that bad - after all, they're just a service provider, right? - but let's put this in context for a moment. Currently, CloudFlare essentially controls 11% of the 10k biggest websites, over 8% of the 100k biggest websites (source), and almost 5% of sites on the entire web (source). According to their own numbers from 2012(!), they had more traffic than several of the most popular sites and services on earth combined, and almost half the traffic of Facebook. It has only grown since. And unlike every other backbone provider and mitigation provider, they can read your traffic in plaintext, TLS or not.

https://www.datanyze.com/market-share/cdn/

Amazon and Akamai are both larger providers than Cloudflare. Akamai can also be set up in the way the article criticizes Cloudflare for (i.e. no TLS between the edge and the origin, so the traffic can be meddled with).

Tbh, I'd be more worried about Amazon's position on that pie than Cloudflare's, since it's comparable to Google's.

https://www.comscore.com/Insights/Rankings/comScore-Releases...

It's rarely healthy for any market to have a majority owned by a single player, even if Tech tends to generate winner-take-most situations in the marketplace.


And let's not forget their laissez-faire approach to abuse reports, which they generally answer with

> We are a reverse proxy, we are not responsible

disregarding any evidence that spamming/DOS/malware/phishing operations are protected by their rproxy services (essentially hiding the actual hoster, which prevents sending abuse reports there) and enabled by their providing authoritative DNS and TLS certificates.

They do this pretending anything that comes from their network is not their responsibility, while at the same time giving tor users a hard time.

Thanks CloudFlare!


They do not target Tor users directly (although since Tor is a country code there is an off-by-default option to block/annoy all Tor users), they merely target any IP that is associated with malicious traffic, which is what many Tor exit nodes are.

(I do agree that they should make it easier to send abuse reports to the real owner, however)


One reason I know CloudFlare is increasingly used is that I am too often welcomed with a CloudFlare page when I expected to see the site. I am not convinced this is good UX.


It's horrific UX.

I use a VPN and I constantly get hit with the cloudflare page and a lot of the sites I see it on are small, amateur sites or personal sites that just need a Wordpress page.

They don't need Cloudflare.

Getting bloody sick of this new centralization of the web. It's a worrying new trend that the very nature of how the Internet works is being reshaped and controlled by a handful of entities.


Here's a fun game for the family: Search for "cheesecake" in your search engine of choice and then count how many links you have to check before you find a webpage which does not connect to one of: Google, CloudFlare, Akamai, Amazon, Facebook, Twitter

For me, it was 34 links. And that 34th link was the Wikipedia-article for cheesecake...


^ My point exactly.

There are a handful of entities that now control the flow of the Internet's information and that worries me deeply.


Often, that description is exactly the kind of site that needs Cloudflare - the blog or brochureware site built by someone with no devops or Wordpress abilities in general - who ends up with a Wordpress site that maxes out at 0.5req/sec. Which then gets a Facebook advertising campaign.


Putting the free version of Cloudflare in front of a Wordpress site is the greatest thing. I used to spend a lot of time hunting down performance/stability issues with a VPS I built; now it's almost never a problem.


On the other hand, how often do pages that suddenly become popular get hugged to death, because they are just a small Wordpress page on a VPS or shared hosting somewhere? Throwing a CDN in front to catch these traffic spikes can do a lot.


Cloudflare has multiple uses. You may want it for their dos protection for example. In that case it doesn't matter if it's small WordPress site or something else.


Previous discussion, with comments from the author:

https://news.ycombinator.com/item?id=12096321


"... breaks the trust model of SSL/TLS,"

Certainly some of the encryption one can get via SSL/TLS is worth something. (But then one could use that encryption outside of TLS, too.)

And maybe some elements of the protocol are worth something.

But on the open internet is the "trust model" really worth anything?

It is so ridiculously easy to subvert. Cloudflare does it on a mass scale.

But one does not need to be Cloudflare to do it. The "inconvenience" of subverting SSL/TLS is minimal.

Any website who is delegating their DNS to some third party is potentially vulnerable not to mention any user who is delegating their DNS lookups to a third party. Those are very large numbers.

Note I said open internet. I am not referring to internal networks.

Also - Question for the author: Was the archiving of dnshistory.org successful? Did they recently shut down and use Cloudflare to block ArchiveTeam?


Not sure I understand what you are saying. If you are saying that "Any website who is delegating their DNS to some third party is potentially vulnerable" to subverting SSL/TLS, then you are absolutely wrong. Malicious DNS can help the attacker to insert her servers between the user and the web service the user is trying to access, but it doesn't subvert TLS/SSL man-in-the-middle protection in any way.


Malicious DNS can request a cert for the domain via e.g. Let's Encrypt, and then it can do whatever it wants.


My understanding is that it doesn't apply at least to EV certificates. Also, the parent says that "any user who is delegating their DNS lookups to a third party", but that can't apply to such users either.


> But on the open internet is the "trust model" really worth anything?

It is. I'd be the first to admit that the CA model is absolutely not a solution that works well overall[1], but regardless of that, it's very hard to get away with a non-targeted attack on TLS (eg. by compromising a CA). Only targeted attacks are really viable, dragnet surveillance is not.

The problem with the way CloudFlare breaks the trust model, is that it's broken for everybody - not just high-risk individuals in a targeted attack, but every single person that talks to a site going through CF. It's completely viable to do dragnet surveillance or modification without anybody realizing it, and this makes it a much bigger breach than the CA model.

> Any website who is delegating their DNS to some third party is potentially vulnerable not to mention any user who is delegating their DNS lookups to a third party. Those are very large numbers.

Not without making a lot of noise. In the context of not having a good way to establish trust for previously unknown entities (Web-of-Trust doesn't really work there), the best we can do - at least, until we find a better solution - is making tampering as public and noisy as possible, so that it becomes risky for a malicious actor to carry out large-scale attacks.

Keep in mind that DNS requests are not directly done by clients, but rather through hierarchical caching resolvers - assuming that CAs used something like Google's DNS servers, an attacker on the DNS provider's network would have to spoof the DNS responses to Google, and as such have a very large portion of the internet end up on the wrong DNS record.

With the amount of DNS history services and security companies monitoring DNS discrepancies, it'd be pretty much impossible to get away with this quietly. Any attempt at subverting the verification process by changing DNS records would immediately show up everywhere.

> Also - Question for the author: Was the archiving of dnshistory.org successful? Did they recently shut down and use Cloudflare to block ArchiveTeam?

Unfortunately, our archival effort was interrupted by the operators of dnshistory.org enabling "I'm Under Attack" mode. We did not have enough time to implement the bypass before the service shut down (although it is what caused me to write the bypass code linked from the article).

I have to say it was a rather strange case anyway. We'd contacted them well in advance - multiple times, I believe - to ask about obtaining a copy of their data (which would mean we didn't have to scrape their servers), and they'd completely ignored the messages.

Only after we'd contacted them to ask about the block, did they reply with a biting message about "causing issues for other users on the site". Why they thought the impending shutdown and removal wouldn't cause issues for their users, I don't know.

[1]: http://cryto.net/~joepie91/blog/2015/05/01/on-mozillas-force...


I agree with the distinction you make between targeted and non-targeted. But I think being able to easily accomplish targeted attacks on SSL/TLS is a cause for concern -- and indeed that's what I'm thinking of. My thought is that it should not be possible for users to place such trust in something that is so easily subverted. As for DNS, I see no reason why one cannot encrypt DNS packets to prevent tampering. If users ignorantly want to use third party caches (which opens up more problems than just the one you mentioned), even when it's so easy to run a local cache, then we see arguments for another "trust model", e.g., DNSSEC, etc. Same problems.


CloudFlare is basically making much of the web unusable over Tor. That's my main beef against them.




From an admin standpoint, the SSL/MITM security issue is just huge.

From a user perspective, I just don't visit sites anymore that force me to solve a stupid captcha.

And I also hate the fact that I am additionally forced to submit to being tracked by Google (via the captcha).


Ohhh boy.

> Single-homed bandwidth can be gotten for $0.35/TB, DDoS mitigation services are plentiful and sometimes even provided by default, and the web is generally Fast Enough.

This only works when you're buying _a lot_ of bandwidth or you're buying cheap bandwidth (which usually has sub-standard routing). If you host your app servers on a standard cloud like AWS you're paying dollars per TB (but you're on a damn good network). DDoS mitigation services in many cases consist primarily of "we'll blackhole your IP if one happens". DDoS mitigation services that actually leave your site running are costly.

The web is _maybe_ generally Fast Enough when you're lucky to be on an ISP and network connection that gives you a decent path to wherever your content is hosted but that's not a given, particularly these days when the majority of most services' users are mobile, users are increasingly geographically distributed and consumer ISPs are increasingly hostile towards service providers (e.g. if your transit was through Cogent, Comcast's Netflix dispute may have interfered).

> Essentially, there's not really a reason to use CloudFlare anymore, and the majority of sites won't see any real benefit from it at all. I'll go into the alternatives further down the article, but I want to address some of the problems that CloudFlare introduces first.

You haven't provided nearly enough evidence to backup this statement.

> Encryption

Yes, like any CDN, CloudFlare needs to have access to your content in order to cache it. Like any CDN, if the connection between the edge nodes and your own servers is not secure, a hostile ISP can do whatever it wants with it. This isn't CloudFlare specific or even CloudFlare universal. This here is a fault in your backend service. CloudFlare offers you the option to encrypt that traffic and you've chosen not to.

> In contrast, CloudFlare is just a reverse proxy with a very fast connection. Layer 3/4 attacks (those aimed at the underlying network infrastructure, rather than the application or protocol itself) will only ever reach up to the point where it's handled by a server rather than just passed through, and in a "reverse proxy"-type setup, that server is CloudFlare. They're not actually mitigating anything, it just so happens that they are the other side of the connection and thus "take the hit"!

So what you're saying is that a DDoS isn't hitting my servers and my users still get their content? That's called DDoS mitigation. Just because it doesn't work the way you're used to doesn't mean it's not working.

> Indeed it is essentially impossible to archive something that's in "I'm Under Attack" mode, despite that usually being the exact moment where archival is necessary!

Preventing automated systems from making requests to your site when you're in the middle of a DDoS seems sensible enough. If it's truly necessary (and permitted), contact the site and ask for the IP of the backend. If your work is appreciated, they'll give it to you. As a site operator, if you want to archive my site, I'd rather you contact me. I'll give you my backend IP and hell, might even give you rsync access or something. Archiving through a browser is the least desirable way to have my stuff archived.

> In most of the Western world, connectivity is pretty good. You can go from most places in the US to Europe and back - across the ocean! - in about 140 milliseconds. A commonly used metric in the web development industry is that your page and all your assets should be loaded in under 300 milliseconds.

I'm located in Silicon Valley and a ping to Germany takes 172ms, a ping to Canada takes 90, a ping to Amsterdam takes 154 and so on. A ping to San Jose where my nearest CloudFlare/Akamai/everything POP is located takes 14ms.

> Assuming you're declaring all the assets on your page directly, that would make it two roundtrips totalling about 280 milliseconds, since the assets can be retrieved in parallel.

This is incredibly optimistic. Open up the Network tab in Chrome's dev tools and open Amazon, Facebook, even a WordPress blog sometime. Hell, HN's front page barely loads that fast.

> CloudFlare can't cache the actual pageloads locally, because they are dynamic and different for everybody.

This depends entirely on the content of the page. Not all content is dynamic. Blogs and news sites for example are largely static. Further, CloudFlare can cache the static _parts_ of the page and send only the dynamic content: https://blog.cloudflare.com/cacheing-the-uncacheable-cloudfl...

> So why not just use a CDN? Using a CDN means you can still optimize your asset loading, but you don't have to forward all your pageloads through CloudFlare. Static assets are much less sensitive, from a privacy perspective.

CloudFlare is a CDN. Why not use CloudFlare as a CDN for your static assets? CloudFlare isn't making you turn it on for all your domains. You can totally turn CloudFlare on for static.mysite.com and leave mysite.com on your own server.
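
And the origin stays in control of what the edge may cache either way - a rough sketch (Go, paths and max-age values are just examples) of the usual split between static assets and dynamic pages:

  // Long-lived caching for static assets, none for dynamic pages, expressed
  // with ordinary Cache-Control headers that CDNs (CloudFlare included)
  // generally honour for cacheable content.
  package main

  import "net/http"

  func cacheFor(next http.Handler, policy string) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      w.Header().Set("Cache-Control", policy)
      next.ServeHTTP(w, r)
    })
  }

  func main() {
    mux := http.NewServeMux()

    // Static assets: let the edge (and browsers) cache for a day.
    static := http.StripPrefix("/static/", http.FileServer(http.Dir("./static")))
    mux.Handle("/static/", cacheFor(static, "public, max-age=86400"))

    // Dynamic pages: tell intermediaries not to cache at all.
    mux.Handle("/", cacheFor(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      w.Write([]byte("per-user content, rendered on every request\n"))
    }), "private, no-store"))

    http.ListenAndServe(":8080", mux)
  }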

> And this is the problem with CloudFlare in general - you can't usually make things faster by routing connections through somewhere, because you're adding an extra location for the traffic to travel to, before reaching the origin server.

This is the same for every CDN and like every CDN, you're relying on the CDN's internal network to get somewhere faster than your own would, and for the CDN's cache to eliminate the need even to do the round trip. If the content is already in Asia, the CDN doesn't need to make the request back to the origin at all. That eliminates entire intercontinental round trips and that's massive.

> Unfortunately, all of these issues together mean that CloudFlare is essentially breaking the open web. Extreme centralization, breaking the trust model of SSL/TLS, a misguided IP blocking strategy, requiring specific technologies like JavaScript to be able to access sites, and so on. None of this benefits anybody but CloudFlare and its partners.

No, you have opinions biased by (valid but not universal) philosophies and concerns. These features are desired and beneficial to many people.

This is way too much text for me so I'll stop here.

TL;DR: This article mainly complains about things that are common to every CDN, while demonising CloudFlare specifically for unknown reasons. The rest is mainly complaints about Under Attack Mode.


> Like any CDN, if the connection between the edge nodes and your own servers is not secure, a hostile ISP can do whatever it wants with it.

Unless I'm mistaken, most regular small/medium-sized users of a CDN will use a 'plug n play' type CDN, where the CDN just pulls from the origin server over public HTTP, and in that scenario you can't really fake SSL if you didn't set it up on your server, and your users won't believe they are browsing over HTTPS when on your site. Cloudflare changes this model and superficially tells the user they're using HTTPS, but then on the second link to Cloudflare, it's unencrypted. Even worse, as we can see here and elsewhere, a lot of people explicitly sign up to Cloudflare for SSL! That means most likely they didn't set up SSL on their server.

> I'll give you my backend IP and hell, might even give you rsync access or something. Archiving through a browser is the least desirable way to have my stuff archived.

Yeah, but that's the most optimistic view of it all. If you are at all familiar with ArchiveTeam and others, the main method for archiving web sites is through the public site itself. For many reasons, site admins might not want to give direct access to their server, so the simplest and most universal path is to just crawl the site in order to 'get everything' (across all the sites being archived), as long as you don't flood the server with requests, which most archivists don't do.

> No, you have opinions biased by (valid but not universal) philosophies and concerns. These features are desired and beneficial to many people.

So you don't have any worries about Cloudflare and the centralization? What about Tor users' right to privacy, and how the captchas are completely insane? Cloudflare is unfortunately a huge pain in the ass and I'm not sure they can be trusted. There's no proof they are connected to any governments as far as I know, but they have now become this standard thing that everyone enables because it's free, and the surveillance possibilities are _vast_, even worse than cookies/advertising IMO, because there is almost no way to circumvent it as a normal end user.


> Cloudflare changes this model and superficially tells the user they're using HTTPS, while the second leg, from Cloudflare back to the origin, is unencrypted. Even worse, as we can see here and elsewhere, a lot of people explicitly sign up to Cloudflare for SSL! That most likely means they didn't set up SSL on their own server.

This essentially pushes any MITM to CloudFlare's network, which is _usually_ better than the user's and so far has exactly one confirmed interception. This is a valid concern and could certainly be better but I believe eliminating the CloudFlare -> User vector from a potential attack is a good thing.

> Yeah, but that's the most optimistic view of it all. If you are at all familiar with ArchiveTeam and others, the main method for archiving web sites is through the public site itself. For many reasons, site admins might not want to give direct access to their server, so the simplest and most universal path is to just crawl the site in order to 'get everything' (across all the sites being archived), as long as you don't flood the server with requests, which most archivists don't do.

While I generally support archival efforts, making a large number of automated HTTP requests (you're archiving the entire site after all) while I'm in the middle of a DDoS is not appreciated, particularly if any of that content has to come from a database (because you're accessing old stuff that isn't in my site cache). This could make a barely tolerable DDoS completely take down my origin.

> So you don't have any worries about Cloudflare and the centralization? What about Tor users' right to privacy, and how the captchas are completely insane? Cloudflare is unfortunately a huge pain in the ass and I'm not sure they can be trusted. There's no proof they are connected to any governments as far as I know, but they have now become this standard thing that everyone enables because it's free, and the surveillance possibilities are _vast_, even worse than cookies/advertising IMO, because there is almost no way to circumvent it as a normal end user.

Like I said, the philosophies and concerns have some merit but they're not universal. I have no issues with CloudFlare and "centralisation". If CloudFlare is shown to commit some kind of wrongdoing there's absolutely nothing stopping me from moving elsewhere.


> This only works when you're buying _a lot_ of bandwidth or you're buying cheap bandwidth (which usually has sub-standard routing). If you host your app servers on a standard cloud like AWS you're paying dollars per TB (but you're on a damn good network).

Sure. This is why I explicitly stated single-homed. For datacenter blend, you typically pay between $1 and $5 per TB, and most providers don't add much (if any) on top of that in terms of costs. That's considerably cheaper than what AWS charges, for comparable routing.

> DDoS mitigation services in many cases consist primarily of "we'll blackhole your IP if one happens". DDoS mitigation services that are affordable and leave your site running are costly.

That's just plain wrong. If nullrouting happens upon attack, then it was never a DDoS mitigation to begin with (and yes, I am aware that some sketchy hosts try to sell this as "DDoS protection"). None of the options I've listed fall into that category.

> The web is _maybe_ generally Fast Enough when you're lucky to be on an ISP and network connection that gives you a decent path to wherever your content is hosted but that's not a given, particularly these days when the majority of most services' users are mobile, users are increasingly geographically distributed and consumer ISPs are increasingly hostile towards service providers (e.g. if your transit was through Cogent, Comcast's Netflix dispute may have interfered).

I'm taking all of these into account in my assertions.

> You haven't provided nearly enough evidence to backup this statement.

I've addressed all the arguments I've commonly heard, and offered to cover any other usecases through e-mail or the comments. What more are you expecting? Magic?

> So what you're saying is that a DDoS isn't hitting my servers and my users still get their content? That's called DDoS mitigation. Just because it doesn't work the way you're used to doesn't mean it's not working.

Nope. The moment somebody locates your origin - and somebody will - you're boned, with no way to mitigate it whatsoever. DDoS mitigation can only be effective if it actually isolates the backend, which CloudFlare doesn't (and cannot) do.
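Even the usual trick of firewalling the origin down to CloudFlare's published edge ranges only hides the origin from HTTP probes; it doesn't help once somebody aims a packet flood straight at the leaked IP, because that traffic still saturates your uplink before the firewall can drop it. For illustration, the trick looks roughly like this (a sketch; the ips-v4 URL is CloudFlare's published list, the generated iptables lines are just illustrative):

    # Sketch: allow only CloudFlare's published edge ranges to reach the
    # origin's HTTP(S) ports. Hides the origin from HTTP probes, but does
    # nothing about volumetric floods hitting the IP directly.
    import urllib.request

    CF_IPV4_LIST = "https://www.cloudflare.com/ips-v4"

    def allow_rules(ports=(80, 443)):
        cidrs = urllib.request.urlopen(CF_IPV4_LIST).read().decode().split()
        rules = []
        for cidr in cidrs:
            for port in ports:
                rules.append(f"iptables -A INPUT -p tcp -s {cidr} --dport {port} -j ACCEPT")
        # Everything else gets dropped on those ports.
        for port in ports:
            rules.append(f"iptables -A INPUT -p tcp --dport {port} -j DROP")
        return rules

    if __name__ == "__main__":
        print("\n".join(allow_rules()))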

> Preventing automated systems from making requests to your site when you're in the middle of a DDoS seems sensible enough.

It's not. You don't want to prevent automated traffic, you want to prevent attacker traffic. Completely different category.

> If it's truly necessary (and permitted), contact the site and ask for the IP of the backend. If your work is appreciated, they'll give it to you.

How do you propose a search engine spider does this, exactly?

> As a site operator, if you want to archive my site, I'd rather you contact me. I'll give you my backend IP and hell, might even give you rsync access or something.

We did, in the particular case I am referring to. The operator ignored our messages.

> Archiving through a browser is the least desirable way to have my stuff archived.

That's not true from an archival perspective, even if it is true from a resource usage perspective; you ideally want to archive all content as it is presented to the user, so that you can replicate it later (which is what e.g. the Wayback Machine does, and why WARC is the canonical format for this).
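For illustration, "archive it as the user sees it" looks roughly like this with the warcio library (a sketch of one common tool, not ArchiveTeam's actual pipeline):

    # Minimal sketch: capture an HTTP exchange into a WARC file with warcio.
    from warcio.capture_http import capture_http
    import requests  # warcio's docs say requests must be imported after capture_http

    with capture_http("example.warc.gz"):
        # Both the request and the response records land in the WARC, which
        # is the same format the Wayback Machine replays from.
        requests.get("https://example.com/")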

> I'm located in Silicon Valley and a ping to Germany takes 172ms, a ping to Canada takes 90, a ping to Amsterdam takes 154 and so on. A ping to San Jose where my nearest CloudFlare/Akamai/everything POP is located takes 14ms.

Okay, and?

> This is incredibly optimistic. Open up the Network tab in Chrome's dev tools and open Amazon, Facebook, even a WordPress blog sometime. Hell, HN's front page barely loads that fast.

Have a look at what's causing the delays. Hint: it's usually not the static assets of the site itself.

> CloudFlare is a CDN. Why not use CloudFlare as a CDN for your static assets? CloudFlare isn't making you turn it on for all your domains. You can totally turn CloudFlare on for static.mysite.com and leave mysite.com on your own server.

Sure, possible. But who actually does this? And why would they, if it takes extra work? And why would you expect people to do that if the whole selling point of CloudFlare is not having to think about this kind of stuff?

How it can be used in theory doesn't really matter. The practical consequences do. And those are not pretty, at all. If CloudFlare wants to become less of a threat to the web, then perhaps they should take the first step in preventing the harmful ways of using it.

> This is the same for every CDN and like every CDN, you're relying on the CDN's internal network to get somewhere faster than your own would, and for the CDN's cache to eliminate the need even to do the round trip. If the content is already in Asia, the CDN doesn't need to make the request back to the origin at all. That eliminates entire intercontinental round trips and that's massive.

I am aware. That section is referring specifically to dynamic content, static content having already been covered by the bit on CDNs.

> No, you have opinions biased by (valid but not universal) philosophies and concerns. These features are desired and beneficial to many people.

These "philosophies and concerns" are core to the web and how it was designed to work, and it'd probably be a good idea to try and understand why they are as they are.

It's a cheap cop-out to just wave everything away as "well, that's just, like, your opinion, man" and pretend that that somehow makes all the real-world consequences go away. It doesn't.

And yes, I am aware that these features are beneficial to many people. That's why I provided alternatives that didn't have the same issues.


OP is a skid who used Cloudflare as budget DDoS protection to protect his site from other skids; his concerns don't reflect those of non-idiots. OP's "collective" has been owned many times, so take his conclusions with a grain of salt.


> In 2011, however, it was pretty much impossible to get working DDoS mitigation for less than $100 a month

I would have loved to know who provided this service so cheaply back then. IIRC in 2011, Prolexic and BlackLotus were your only options, starting at $5k/mo, and you also had to be large enough to own an ASN because GRE tunneling was the only delivery method.


I forgot the name, but it was some reseller of Awknet. They offered DDoS-mitigated hosting services on their own infrastructure. I think GigeNET also had a slightly more expensive offering, but I'm not sure whether that was ever publicly announced.


This whole post is ridiculous and comes off as some personal attack without much technical merit:

- Every big network company is at the mercy of government. Not sure what the point is here... so we should ban all big companies? Everyone from the ISP to the website host to the network equipment manufacturer can and might be compromised.

- Every CDN today is a reverse proxy and MITM is what they do. That's just how it fundamentally works. No magical way around this.

- CF supports websockets now in addition to HTTP(S) for every plan. If you need more protocol support then use a service specialized for that; CF clearly states that they don't focus on mail or game servers.

- Who cares if they do mitigation? What I want is my origin to be protected, that's it. Whether they soak it up with network capacity or have advanced processes doesn't matter to me.

- Free plans are free, so they have every right to kick you off if you consume too many resources and are getting DDoSed all the time. Pro plans also get plenty of protection; you have to be seriously under attack to have them contact you about it. And in that case, $200/month is probably one of the cheapest options, considering most other hosts (like AWS) will be happy to bill you like crazy or just can't handle it at all.

- The "under attack" option is supposed to pose problems, because you're under attack. It's pretty clear that it's not the normal mode of operation. Don't turn this on unless you really need it.

- Not sure what the issue is with having to whitelist bots with them. A whitelist approach is far better than trying to maintain an infinite blacklist. Also they are more advanced than simple IP filters, that approach stopped working a decade ago.

- Connectivity is not good, even in much of the western world, and varies widely between location, device, capacity, etc. Latency is a real physical limitation that can only be overcome by being closer to users. Try browsing a site in another continent that's not using a CDN and see what happens. Also CF is a CDN, not sure how "use a CDN" was an answer to this.

The only real criticism is their Flexible SSL option, which doesn't encrypt the connection to the origin, and this has been debated endlessly. I think their recent announcement of free origin certs is a way to improve this, but ultimately it's a potential security risk and up to the website operator to understand.

We use CF because they provide DNS, CDN, SSL, free bandwidth, DDOS protection and better features than others for a single flat price. It works really well for us but it's about understanding how it really works and the trade-offs. If this doesn't work for you and your security or business needs, then use something else.


> - Every big network company is at the mercy of government. Not sure what the point is here... so we should ban all big companies? Everyone from the ISP to the website host to the network equipment manufacturer can and might be compromised.

Covered in the article.

> - Every CDN today is a reverse proxy and MITM is what they do. That's just how it fundamentally works. No magical way around this.

Nope. Covered in the article.

> - CF supports websockets now in addition to HTTP(S) for every plan. If you need more protocol support then use a service specialized for that; CF clearly states that they don't focus on mail or game servers.

How does them stating this make it not a problem?

> - Who cares if they do mitigation? What I want is my origin to be protected, that's it. Whether they soak it up with network capacity or have advanced processes doesn't matter to me.

But your origin isn't protected, that's the point. Only their servers are.

> - Free plans are free, so they have every right to kick you off if you consume too many resources and are getting DDoSed all the time. Pro plans also get plenty of protection; you have to be seriously under attack to have them contact you about it. And in that case, $200/month is probably one of the cheapest options, considering most other hosts (like AWS) will be happy to bill you like crazy or just can't handle it at all.

You're comparing to mitigation-less providers. Compare to providers that offer mitigation instead. Apples and oranges.

> - The "under attack" option is supposed to pose problems, because you're under attack. It's pretty clear that it's not the normal mode of operation. Don't turn this on unless you really need it.

It only poses problems for legitimate users, not the attacker(s). Covered in the article.

> - Not sure what the issue is with having to whitelist bots with them. A whitelist approach is far better than trying to maintain an infinite blacklist. Also they are more advanced than simple IP filters, that approach stopped working a decade ago.

Covered in the article.

> - Connectivity is not good, even in much of the western world, and varies widely between location, device, capacity, etc. Latency is a real physical limitation that can only be overcome by being closer to users. Try browsing a site in another continent that's not using a CDN and see what happens. Also CF is a CDN, not sure how "use a CDN" was an answer to this.

And CloudFlare doesn't actually make this better. Covered in the article. And no, CloudFlare is not a CDN - it's an Anycast proxy.

> The only real criticism is their Flexible SSL option, which doesn't encrypt the connection to the origin, and this has been debated endlessly. I think their recent announcement of free origin certs is a way to improve this, but ultimately it's a potential security risk and up to the website operator to understand.

Still doesn't solve the problem, as covered in the article.

---

Did you actually read the article, or just skim it?


I read your article and replied to each major section. Nothing is "covered" as I've clearly stated the issues.

It seems like you fundamentally don't understand what a CDN is, how it works, how latency affects website performance, and have a strange idea of "mitigation" when in actuality most DDOS protection works exactly the same way. There's no difference between Fastly, CloudFront, MaxCDN or other companies doing the exact same thing, except that CloudFlare has a few unique features and you don't like them.

Here's a test: show me exactly how using Fastly in front of my webapp is different than using CloudFlare?


> I read your article and replied to each major section. Nothing is "covered" as I've clearly stated the issues.

I'll even quote the relevant sections for you.

> - Every big network company is at the mercy of government. Not sure what the point is here... so we should ban all big companies? Everyone from the ISP to the website host to the network equipment manufacturer can and might be compromised.

"And unlike every other backbone provider and mitigation provider, they can read your traffic in plaintext, TLS or not."

(Addendum: Compromising a server is much harder to do at dragnet scale than MITMing.)

> - Every CDN today is a reverse proxy and MITM is what they do. That's just how it fundamentally works. No magical way around this.

"Using a CDN means you can still optimize your asset loading, but you don't have to forward all your pageloads through CloudFlare. Static assets are much less sensitive, from a privacy perspective."

> - The "under attack" option is supposed to pose problems, because you're under attack. It's pretty clear that it's not the normal mode of operation. Don't turn this on unless you really need it.

"Oh, and about that "I'm Under Attack" mode that you get on the Free plan as well? Yeah, well, it doesn't work. But don't take my word for it - here's proof. That code will solve the 'challenge' that it presents to your browser, in a matter of milliseconds. Any attacker can trivially do this. And the challenge can't be made more difficult, because it would make it prohibitively expensive for mobile and embedded devices to use anything hosted at CloudFlare.

But while it doesn't stop attackers, it does stop legitimate users.

[...]

Some might argue that these kind of archival bots are precisely what CloudFlare is meant to protect against, but that's not really true. If that were the case, why would there be an offer to add ArchiveBot to the whitelist to begin with? Why would the Wayback Machine be on that very same whitelist?"

> - Not sure what the issue is with having to whitelist bots with them. A whitelist approach is far better than trying to maintain an infinite blacklist. Also they are more advanced than simple IP filters, that approach stopped working a decade ago.

"I've been told that ArchiveBot can be added to the internal whitelist that CloudFlare has, but this completely misses the point. Why do I or anybody else need to talk to a centralized gatekeeper to be able to access content on the web, especially if there might be any number of such gatekeepers? This kind of approach defeats the very point of the web and how it was designed!

And for a volunteer-run organization like ArchiveTeam, it's far more tricky to implement support for these "challenge schemes" than it is for a botnet operator, who stands to profit from it. That problem only becomes worse as more services start implementing these kind of schemes, and often it takes a while for people to notice that their requests are being blocked - sometimes losing important information in the process."

> - Connectivity is not good, even in much of the western world, and varies widely between location, device, capacity, etc. Latency is a real physical limitation that can only be overcome by being closer to users. Try browsing a site in another continent that's not using a CDN and see what happens. Also CF is a CDN, not sure how "use a CDN" was an answer to this.

"But perhaps you're also targeting users in regions with historically poor connectivity, such as large parts of Asia. Well, turns out that it doesn't really work there either - CloudFlare customers routinely report performance problems in these regions that are worse than they were before they switched to CloudFlare.

This is not really surprising, given the mess of peering agreements in Asia; using CloudFlare just means you're adding an additional hop to go through, which increases the risk of ending up on a strange and slow route.

And this is the problem with CloudFlare in general - you can't usually make things faster by routing connections through somewhere, because you're adding an extra location for the traffic to travel to, before reaching the origin server. There are some cases where these kind of techniques can make a real difference, but they are so rare that it's unreasonable to build a business model on it. Yet, that's precisely what CloudFlare has done."

> The only real criticism is their Flexible SSL option, which doesn't encrypt the connection to the origin, and this has been debated endlessly. I think their recent announcement of free origin certs is a way to improve this, but ultimately it's a potential security risk and up to the website operator to understand.

"But let's pretend that CloudFlare realizes that Flexible SSL was a mistake, and removes the option. They'd then require TLS between CloudFlare servers and the origin server as well. While this solves the specific problem of other ISPs meddling with the connection, it leaves a bigger problem unsolved: the fact that CloudFlare itself acts as an MITM (man-in-the-middle). By the very definition of how their system works, they must decrypt and then re-encrypt all traffic, meaning they will always be able to see all the traffic on your site, no matter what you do."

--

So yes, it's all covered in the article. If you believe that something isn't fully addressed, or it somehow isn't accurate, or you don't understand how it relates to that - then ask concrete questions. Don't just throw your hands up in the air going "BUT IT DOESN'T COVER THAT!", when it clearly does.

> It seems like you fundamentally don't understand what a CDN is, how it works, how latency affects website performance, and have a strange idea of "mitigation" when in actuality most DDOS protection works exactly the same way.

No, it doesn't. From the article:

"Traditional DDoS mitigation services work by analyzing the packets coming in, spotting unusual patterns, and (temporarily) blocking the origin of that traffic. They never need to know what the traffic contains, they only need to care about the patterns in which it is received. This means that you can tunnel TLS-encrypted traffic through a DDoS mitigation service just fine, without the mitigation service ever seeing the plaintext traffic... and you're still protected."

> There's no difference between Fastly, CloudFront, MaxCDN or other companies doing the exact same thing, except that CloudFlare has a few unique features and you don't like them.

Again, straight from the article:

"While there are some newer providers that offer similar services to CloudFlare - and I consider them bad on exactly the same grounds - they run on a much smaller scale, and have much less impact."

> Here's a test: show me exactly how using Fastly in front of my webapp is different than using CloudFlare?

When did I ever claim it was? If it works the same, it's prone to the same issues. This is a complete strawman.


Nice read, I couldn't agree more, though a paragraph about their lack of responsibility when they proxy malicious crap via their network would have been appropriate too. I've often sent complaints where they were part of serving malicious content, and they send back some copy-paste 'we are not hosting the content' reply, while they could easily do something about it, for example by simply not proxying that crap anymore.

And their HN posts annoy me too (now that is just my problem), but for some reason they almost always seem to get posted twice.


Looks like I'll have to actually get around to properly setting up SSL on my website after reading this; I only used CloudFlare because I was lazy.


The plaintext TLS part reminded me of the "SSL added and removed here :v)" slide regarding Google's infrastructure.


I have to disagree on the CDN point made. We did extensive benchmarks several months ago and found CloudFlare to be the fastest or near-fastest in every metric (and it's saving us hundreds of dollars per month). It works fantastically well as a low-cost CDN, and yes, CDNs have a lot of value to a lot of sites.

http://goldfirestudios.com/blog/142/Benchmarking-Top-CDN-Pro...


I use Cloudflare because I host my static website on GitHub. The Flexible SSL mode is good enough because there's no user data being passed around, only articles of mine.

I've used the Full SSL mode on self-hosted servers and can't see what the dilemma is besides you being paranoid that Cloudflare will tamper with data passing through them. Evidence?


> there's no user data being passed around, only articles of mine

That's not quite true. The sensitive data that is getting passed around in this case isn't your articles, but who is reading them.

> besides you being paranoid that Cloudflare will tamper with data passing through them. Evidence?

Why would this require evidence? It's a threat, plain and simple. Threat modeling isn't based on evidence, it's based on assuming the worst-case realistic scenario, because overestimating is less harmful than underestimating.


GitHub Pages does support SSL now, so you can use "Full" SSL mode (not "Full (strict)") with it. We do this for glowing-bear.org, which is just a bunch of static files too.


Then why would you use CloudFlare at all? You already have TLS.


We want to use a custom domain, but TLS with custom domains isn't possible with github pages. https://glowing-bear.github.io/glowing-bear/ isn't exactly nice to type.


>and can't see what the dilemma is besides you being paranoid that Cloudflare will tamper with data passing through them. Evidence?

Why would you sit and wait for something to go wrong, when you could close a potential security problem now?


Where can I get bandwidth for $0.35/TB?


Hurricane Electric: https://he.net/

Seems they're actually down to $0.32/TB now. At least, the per-TB price has historically mirrored the per-Mbps price (you have to pay for both, separately), so I'm assuming that it's $0.32/TB now as well.
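For reference, the unit conversion between the two billing models is simple arithmetic (illustrative only, not a claim about HE's actual pricing):

    # How much data a 1 Mbps commit moves in a 30-day month, fully utilised.
    mbps = 1
    seconds_per_month = 30 * 24 * 3600            # 2,592,000 s
    bytes_per_month = mbps * 1_000_000 / 8 * seconds_per_month
    print(f"{mbps} Mbps sustained ≈ {bytes_per_month / 1e12:.2f} TB/month")  # ≈ 0.32 TB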

Some VPS providers - I can't immediately recall which - will also charge about $0.50/TB without having to pay per mbps at all. That's usually a mix of HE and Cogent.


The advantage of Cloudflare is, indeed, not protection from bots speaking HTTP.

Its main advantage is that it protects you as a site operator from SYN floods, traffic reflection attacks (ohai NTP, DNSoverUDP) and similar attacks. Oh, and it also protects your server from idiots doing portscans.


> it also protects your server from idiots doing portscans.

Why does your server need protection from that?


> Its main advantage is that it protects you as a site operator from SYN floods, traffic reflection attacks (ohai NTP, DNSoverUDP) and similar attacks.

It doesn't. It protects their servers, not yours. This is precisely why "CloudFlare resolvers" are a thing.


Completely agree. It actually made my site slower and killed many of the speed optimizations I had implemented.


I've almost never used Cloudflare. But I really appreciate their blog; it's a very effective advertisement.


Cloudflare "make web properties faster and safer".

Such statements are amusing at best. Seems analogous to an advertisement of a chocolate drink claiming to turn morons into Einsteins.



