
CloudFlare, We Have a Problem - signa11
http://cryto.net/~joepie91/blog/2016/07/14/cloudflare-we-have-a-problem/
======
nikcub
> you can't protect the rest of your infrastructure (mailservers, chat
> servers, gameservers, and so on)

That leads to my technique for discovering origin servers when pen testing
CloudFlare customers: brute force all the DNS names and record types, map out
all the netblocks, scan them for open ports, identify the web server ports,
then attempt to find the vhosts on those ports by setting the Host header in
requests.

You'll almost always find the origin web server (sans protection) and also
dev/staging instances of apps.

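A minimal sketch of the technique described above; the wordlist, ports, and the `example.com` domain are illustrative assumptions, and any real scan of course requires the target's permission:

```python
# Sketch of origin discovery: expand a wordlist into candidate DNS names,
# resolve them, then probe candidate IPs with an explicit Host header.
import socket

# Illustrative wordlist and ports; a real engagement would use far larger lists.
WORDLIST = ["www", "mail", "dev", "staging", "direct", "origin", "ftp"]
WEB_PORTS = [80, 443, 8080, 8443]

def candidate_names(domain, words=WORDLIST):
    """Expand the wordlist into candidate hostnames to brute force."""
    return [f"{w}.{domain}" for w in words]

def resolve(name):
    """Return the IPv4 addresses a name resolves to, or [] if none."""
    try:
        return sorted({r[4][0] for r in socket.getaddrinfo(name, None, socket.AF_INET)})
    except socket.gaierror:
        return []

def probe_vhost(ip, host, port=WEB_PORTS[0], timeout=3):
    """Send a bare HTTP request to `ip` with `host` in the Host header and
    return the status line, to see whether that server answers for the vhost."""
    req = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    with socket.create_connection((ip, port), timeout=timeout) as s:
        s.sendall(req.encode())
        return s.recv(4096).decode(errors="replace").split("\r\n")[0]

if __name__ == "__main__":
    # Network access (and authorization!) required for this part.
    for name in candidate_names("example.com"):
        for ip in resolve(name):
            print(name, ip)
```

Any IP found this way that sits outside CloudFlare's published ranges is a candidate origin or dev/staging host.
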
~~~
gggggg11111
There is an easier way: send a DMCA request. Cloudflare has been bleeding the
IPs of its customers in that manner for years.

~~~
eli
Assuming you don't mind committing perjury, I guess.

~~~
Sir_Substance
Is there actually a recorded instance of someone getting charged for perjury
relating to a DMCA request?

~~~
DanBC
I don't know what happened with this case: [https://torrentfreak.com/warner-
bros-our-false-dmca-takedown...](https://torrentfreak.com/warner-bros-our-
false-dmca-takedowns-are-not-a-crime-131115/)

The law is pretty clearly being broken. I kind of hope Google is collecting
all the false takedown requests they get.

Submitting a false report, under penalty of perjury, and saying "whoops
algorithms lol" shouldn't be something they get away with.

There's this too:
[https://www.eff.org/press/archives/2004/10/15](https://www.eff.org/press/archives/2004/10/15)

------
daenney
I'm a bit disappointed that, for all the "oh no they break TLS, oh securitay"
in the article, the author ends up recommending Caddy. It's written in Go
using Go's TLS implementation, which is only a partial implementation of TLS
1.2, which the authors themselves have said has not been thoroughly reviewed,
which has a few known attacks against it, and which shouldn't be used for
things exposed to the world wide west.

~~~
jpgvm
Out of curiosity do you have examples of known attacks against the Go TLS
stack?

I am going to go have a look myself but would appreciate the head start.

Edit: I found these but they don't seem super terrible and are fixed in the
versions of Go most people are using:
[https://www.cvedetails.com/vulnerability-
list/vendor_id-1418...](https://www.cvedetails.com/vulnerability-
list/vendor_id-14185/Golang.html)

~~~
daenney
No, I do not. Unfortunately that's no guarantee they're not around though :(.

All the data on this is extremely old and no one seems to recently have done a
deep-dive into Go's TLS stack. I really hope someone will (or that Google will
fund the research themselves). It would be beneficial to the ecosystem to have
a thoroughly reviewed implementation and a clear understanding of what the
state is.

Right now all I can go on is a statement of the author about 3 years ago,
around the time of Go 1.2:

 _Cryptography is notoriously easy to botch in subtle and surprising ways and
I’m only human. I don’t feel that I can warrant that Go’s TLS code is flawless
and I wouldn’t want to misrepresent it.

There are a couple of places where the code is known to have side-channel
issues: the RSA code is blinded but not constant time, elliptic curves other
than P-224 are not constant time and the Lucky13 attack might work. I hope to
address the latter two in the Go 1.2 timeframe with a constant-time P-256
implementation and AES-GCM.

Nobody has stepped forward to do a review of the TLS stack however and I’ve
not investigated whether we could get Matasano or the like to do it. That
depends on whether Google wishes to fund it._

[https://blog.golang.org/a-conversation-with-the-go-
team](https://blog.golang.org/a-conversation-with-the-go-team)

I've also had a discussion with one of the Caddy developers, who recommended
fronting it with something that does TLS for you in production, precisely
because no one really seems to know the state of TLS in Go. Arguably
other TLS implementations have other issues but there's something to be said
for "the devil you know".

~~~
tptacek
I would generally expect Go's TLS 1.2 defect rate to be competitive with those
of other mainstream TLS implementations. That code is very well regarded and
designed by domain experts.

I'm one of the founders of Matasano, and started the crypto practice within
Matasano that would have done that Go TLS review, and I can say pretty
confidently that compared to the attention Go TLS already gets from experts,
the long-term benefit of us reviewing it as a formal project would have been
marginal.

~~~
baby
Currently being in that crypto practice, and having found the latest CVE on
golang that affected their TLS stack (it was found in the bignum package), I'm
confident of the inverse.

~~~
kkl
Considering:

* Golang's TLS stack is far less complex in comparison to other projects.

* Golang's TLS stack is written in a "safe" language.

* Golang's TLS stack is written by individuals with lots of experience in SSL/TLS (and its flaws!).

* Contributions to the project are held to very high standards.

Why do you believe the inverse is true?

~~~
baby
> Golang's TLS stack is far less complex in comparison to other projects.

TLS is the definition of complex =)

> Golang's TLS stack is written in a "safe" language.

Not all bugs are memory corruption bugs.

> Golang's TLS stack is written by individuals with lots of experience in
> SSL/TLS (and its flaws!)

> Contributions to the project are held to very high standards.

True, I would expect the code to be of high quality and the bugs to be sparse.
But even knowing this, you always want another pair of eyes looking at your
code. An audit done by other experts brings a lot to the table.

PS: also, I think an audit would be a negligible cost to Google =)

------
pavs
I don't understand why Cloudflare is used by so many sites. I would guess
that for 95% of its users, it doesn't solve any real problem.

[http://www.slashgeek.net/2016/06/07/cloudflare-making-
intern...](http://www.slashgeek.net/2016/06/07/cloudflare-making-internet-
little-bit-faster-select-group-people/)

~~~
unchaotic
They've got a very compelling free tier to get you roped in. Works great as a
CDN, integrated SSL, great interface, DDoS protection / firewall, page rules -
those are just a few of the useful features.

Is there a more comprehensive free tier anywhere else?

P.S. I'm not saying they are the best choice. They are simply too convenient &
comprehensive to get started. With a single click your site can "claim" to be
HTTPS even though the upstream connection "may not" be encrypted.

~~~
joepie91_
Right, but this is pretty much precisely the problem. It's just the Nth
generation of "just centralize the internet through us and we'll take care of
everything for you", but this time marketed at startups. All the usual
problems with centralization still apply.

(It's still not really "DDoS protection", by the way. They just don't offer
that on their free plan.)

~~~
giovannibajo1
It only matters because they have no competitors, otherwise you'd say the same
applies to AWS hosting the origin servers, and the databases of those very
startups likely to use Cloudflare.

In the end, if there were true competitors, it probably wouldn't matter much;
they would be just one popular service that handled your data, like many
others.

~~~
manigandham
They also have plenty of competitors; the CDN space has more companies than
the ISP space, so the internet is already more centralized at a much deeper
level. This is a silly claim by the OP.

------
jfindley
While I agree with many of the points made, his estimation of the number of
round trips is _way_ off. For a start, TLS negotiation requires several round
trips before you even start speaking HTTP. Secondly, browsers (depending on
vendor, version and number of domains) have a limit on the number of in-flight
requests. Thirdly, many pages load some assets via javascript execution, which
adds another set of round trips.

Cloudflare are _very_ widely peered (I think they are now the most widely
peered company), and as such are almost certainly closer to the end user than
the origin server. This really does matter when making lots of round trips,
which is in practice closer to inevitable (unless you have a small SPDY or
HTTP/2 site, which is approximately no one).

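To make the roundtrip argument above concrete, here is a back-of-envelope model; the roundtrip counts and RTT figures are illustrative assumptions, not measurements:

```python
# Rough page-load model: total time is the number of sequential roundtrips
# (TCP handshake, TLS negotiation, then "waves" of parallel asset requests:
# the HTML, then referenced assets, then JS-discovered assets) times the RTT.

def load_time_ms(rtt_ms, tcp=1, tls_roundtrips=2, request_waves=2):
    """Estimate wall-clock load time from sequential roundtrip counts."""
    return (tcp + tls_roundtrips + request_waves) * rtt_ms

# The article's simplified model: plain HTTP, two waves, 140 ms transatlantic RTT.
print(load_time_ms(140, tcp=0, tls_roundtrips=0, request_waves=2))  # 280

# Counting TCP, TLS, and a third JS-triggered wave at the same RTT:
print(load_time_ms(140, request_waves=3))                           # 840

# The same fuller model against a nearby edge node at 14 ms RTT:
print(load_time_ms(14, request_waves=3))                            # 84
```

The gap between the second and third figures is the effect a nearby, widely peered edge has once the real number of roundtrips is counted.
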
~~~
joepie91_
> While I agree with many of the points made, his estimation of the number of
> round trips is way off.

It was admittedly a simplified equation, more for illustrative purposes than
for argumentative purposes.

> For a start, TLS negotiation requires several round trips before you even
> start speaking HTTP.

While true, I'm using just-HTTP as a baseline. There are various techniques
for reducing TLS roundtrip time, and it's so heavily dependent on the
environment that it doesn't make for a practical baseline.

> Secondly, browsers (depending on vendor, version and number of domains) have
> a limit to the number of in flight requests.

Correct. But this is typically solved by bundling assets (on the server side)
or raising that limit (on the client side). On a well-designed site, this
should not pose issues.

> As Cloudflare are very widely peered (I think they are now the most widely
> peered company), and as such are almost certainly closer to the end user
> than the origin server.

The point is that the same applies to Anycast CDNs (which CloudFlare is not,
really, it's a proxy), but without the privacy issues. CF isn't really a good
solution to this.

~~~
giovannibajo1
I don't think anybody is arguing that Cloudflare provides unique features that
can't be obtained otherwise. The point is that CF is VERY convenient to use
compared to having to do everything you mentioned through different services
and then even some more. With Cloudflare you essentially get distributed DNS,
a very fast and universally peered CDN, SSL support, IPv6 support, DNSSEC
support, HTTP/2 support, website optimizations like responsive images or JS
packing, all essentially with one click and for free, and without having to
change a line in your code. And if you hit a DDoS, you swipe your card and you
are done. This is their USP.

Stating that "you can write code and/or do/configure/buy things so that in the
end you can avoid using it" is true, but it's a hard sell for an average
business. The only way to avoid the Cloudflare monoculture is for true
competitors to arise. As much as you might hate it, a reverse proxy seems like what
people want for this, and Cloudflare has even developed a workaround for the
trust issues (Keyless SSL) that competitors could offer to non-enterprise
customers. I think there's space in that market.

~~~
joepie91_
This is essentially arguing that "letting a centralized gatekeeper do all this
is easier". While technically true, it also completely misses the point of the
web and how it was designed - namely, to be decentralized and _not_ require
this.

The thing is that "ease of use" isn't the only metric that matters, even if
it's the easiest metric to _sell_. More often than not - especially in more
recent 'startup culture' - something being 'easier' just means that it's not
being done correctly, and that somebody somewhere is conveniently ignoring the
tradeoffs.

~~~
giovannibajo1
No, what I am arguing is that the CF design of "glorified reverse proxy" is
basically a very good product with strong market demand that faces close to
zero competition.

Compare this to AWS. AWS also "runs" shitloads of the web today; but still,
the "centralization" problem is less mentioned in the context of AWS because
there is fierce competition (Google, and also OpenStack and all the OpenStack
clouds from major vendors).

Reverse proxies are "centralized gatekeepers" but no more than a hosting
provider is, and we accept those as normal (right?).

What if there were 5-6 big players in the "reverse proxy" market, plus a
hundred smaller offerings? Wouldn't that basically solve all the issues of
those worried about the "open web"?

------
fweespeech
> This may not sound that bad - after all, they're just a service provider,
> right? - but let's put this in context for a moment. Currently, CloudFlare
> essentially controls 11% of the 10k biggest websites, over 8% of the 100k
> biggest websites (source), and almost 5% of sites on the entire web
> (source). According to their own numbers from 2012(!), they had more traffic
> than several of the most popular sites and services on earth combined, and
> almost half the traffic of Facebook. It has only grown since. And unlike
> every other backbone provider and mitigation provider, they can read your
> traffic in plaintext, TLS or not.

[https://www.datanyze.com/market-share/cdn/](https://www.datanyze.com/market-
share/cdn/)

Amazon and Akamai are both larger providers than Cloudflare. Akamai can also
operate in the mode the criticism leveled against Cloudflare describes (i.e.
no TLS to the edge, so traffic can be meddled with).

Tbh, I'd be more worried about Amazon's position in that pie than
Cloudflare's, since it's comparable to Google's.

[https://www.comscore.com/Insights/Rankings/comScore-
Releases...](https://www.comscore.com/Insights/Rankings/comScore-Releases-
January-2016-US-Desktop-Search-Engine-Rankings)

It's rarely healthy for any market to have a majority owned by a single
player, even if tech tends to generate winner-take-most situations in the
marketplace.

------
yoo1I
And let's not forget their laissez-faire approach to abuse reports, which they
generally answer with

> We are a reverse proxy, we are not responsible

disregarding any evidence that spamming/DoS/malware/phishing operations are
protected by their reverse-proxy services (essentially hiding the actual host,
which prevents sending abuse reports there) and _enabled_ by their providing
authoritative DNS and TLS certificates.

They pretend that anything coming from their network is not their
responsibility, while at the same time giving Tor users a hard time.

Thanks CloudFlare!

~~~
beardog
They do not target Tor users directly (although, since Tor is treated as a
country code, there is an off-by-default option to block/annoy all Tor users);
they merely target any IP that is associated with malicious traffic, which is
what many Tor exit nodes are.

(I do agree that they should make it easier to send abuse reports to the real
owner, however)

------
cm2187
One reason I know CloudFlare is increasingly used is that I am too often
welcomed with a CloudFlare page when I expected to see the site. I am not
convinced this is good UX.

~~~
_Understated_
It's horrific UX.

I use a VPN and I constantly get hit with the Cloudflare page, and a lot of
the sites I see it on are small, amateur sites or personal sites that just
need a WordPress page.

They don't need Cloudflare.

Getting bloody sick of this new centralization of the web. It's a worrying new
trend that the very nature of how the Internet works is being reshaped and
controlled by a handful of entities.

~~~
Sylos
Here's a fun game for the family: Search for "cheesecake" in your search
engine of choice and then count how many links you have to check before you
find a webpage which does not connect to one of: Google, CloudFlare, Akamai,
Amazon, Facebook, Twitter

For me, it was 34 links. And that 34th link was the Wikipedia-article for
cheesecake...

~~~
_Understated_
^ My point exactly.

There are a handful of entities that now control the flow of the Internet's
information and that worries me deeply.

------
bckygldstn
Previous discussion, with comments from the author:

[https://news.ycombinator.com/item?id=12096321](https://news.ycombinator.com/item?id=12096321)

------
chflags
"... breaks the trust model of SSL/TLS,"

Certainly _some_ of the encryption one can get via SSL/TLS is worth something.
(But then one could use that encryption outside of TLS, too.)

And _maybe_ some elements of the protocol are worth something.

But on the _open internet_ is the "trust model" really worth anything?

It is so ridiculously easy to subvert. Cloudflare does it on a mass scale.

But one does not need to be Cloudflare to do it. The "inconvenience" of
subverting SSL/TLS is minimal.

Any website that delegates its DNS to some third party is potentially
vulnerable, not to mention any user who delegates their DNS lookups to a
third party. Those are very large numbers.

Note I said open internet. I am not referring to internal networks.

Also - Question for the author: Was the archiving of dnshistory.org
successful? Did they recently shut down and use Cloudflare to block
ArchiveTeam?

~~~
jsn
Not sure I understand what you are saying. If you are saying that "Any website
who is delegating their DNS to some third party is potentially vulnerable" to
subverting SSL/TLS, then you are absolutely wrong. Malicious DNS can help the
attacker to insert her servers between the user and the web service the user
is trying to access, but it doesn't subvert TLS/SSL man-in-the-middle
protection in any way.

~~~
cherioo
A malicious DNS provider can request a cert for the domain via e.g. Let's
Encrypt; then it can do whatever it wants.

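For illustration, this can be reduced to a toy model of domain validation: a DV certificate authority only verifies that the requester controls whatever host the domain's DNS currently points at, so whoever controls the DNS records passes the check. All names, IPs, and tokens below are made up:

```python
# Toy model of domain-validated (DV) certificate issuance, in the spirit
# of an ACME HTTP-01-style check. Entirely illustrative data.

dns = {"example.com": "203.0.113.10"}            # legitimate A record
challenge_responses = {"203.0.113.10": "owner-token"}

def issue_cert(domain, presented_token):
    """The 'CA' resolves the domain and checks that the server the DNS
    points at serves the token the requester claims to control."""
    ip = dns[domain]
    return challenge_responses.get(ip) == presented_token

# Honest case: the real owner passes validation.
assert issue_cert("example.com", "owner-token")

# An attacker who controls the DNS repoints the record at their own
# server, serves their own token, and passes validation just as well.
dns["example.com"] = "198.51.100.7"
challenge_responses["198.51.100.7"] = "attacker-token"
assert issue_cert("example.com", "attacker-token")
```

This is why control of a site's DNS is sufficient to obtain a browser-trusted certificate under the DV trust model.
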
~~~
jsn
My understanding is that this doesn't apply to EV certificates, at least.
Also, the parent said "any user who is delegating their DNS lookups to a third
party", but the attack can't apply to such users either.

------
chungy
CloudFlare is basically making much of the web unusable over Tor. That's my
main beef against them.

~~~
manigandham
They're trying to fix this: [https://blog.cloudflare.com/the-trouble-with-
tor/](https://blog.cloudflare.com/the-trouble-with-tor/)

~~~
0xmohit
Response from Tor: [https://blog.torproject.org/blog/trouble-
cloudflare](https://blog.torproject.org/blog/trouble-cloudflare)

------
benevol
From an admin standpoint, the SSL/MITM security issue is just huge.

From a user perspective, I just don't visit sites anymore that force me to
solve a stupid captcha.

And I also hate the fact that I am additionally forced to submit to being
tracked by Google (via the captcha).

------
Veratyr
Ohhh boy.

> Single-homed bandwidth can be gotten for $0.35/TB, DDoS mitigation services
> are plentiful and sometimes even provided by default, and the web is
> generally Fast Enough.

This only works when you're buying _a lot_ of bandwidth or you're buying cheap
bandwidth (which usually has sub-standard routing). If you host your app
servers on a standard cloud like AWS you're paying dollars per TB (but you're
on a damn good network). DDoS mitigation services in many cases consist
primarily of "we'll blackhole your IP if one happens". DDoS mitigation
services that actually leave your site running are costly.

The web is _maybe_ generally Fast Enough when you're lucky to be on an ISP and
network connection that gives you a decent path to wherever your content is
hosted but that's not a given, particularly these days when the majority of
most services' users are mobile, users are increasingly geographically
distributed and consumer ISPs are increasingly hostile towards service
providers (e.g. if your transit was through Cogent, Comcast's Netflix dispute
may have interfered).

> Essentially, there's not really a reason to use CloudFlare anymore, and the
> majority of sites won't see any real benefit from it at all. I'll go into
> the alternatives further down the article, but I want to address some of the
> problems that CloudFlare introduces first.

You haven't provided nearly enough evidence to back up this statement.

> Encryption

Yes, like any CDN, CloudFlare needs to have access to your content in order to
cache it. Like any CDN, if the connection between the edge nodes and your own
servers is not secure, a hostile ISP can do whatever it wants with it. This
isn't CloudFlare specific or even CloudFlare universal. This here is a fault
in your backend service. CloudFlare offers you the option to encrypt that
traffic and you've chosen not to.

> In contrast, CloudFlare is just a reverse proxy with a very fast connection.
> Layer 3/4 attacks (those aimed at the underlying network infrastructure,
> rather than the application or protocol itself) will only ever reach up to
> the point where it's handled by a server rather than just passed through,
> and in a "reverse proxy"-type setup, that server is CloudFlare. They're not
> actually mitigating anything, it just so happens that they are the other
> side of the connection and thus "take the hit"!

So what you're saying is that a DDoS isn't hitting my servers and my users
still get their content? That's called DDoS mitigation. Just because it
doesn't work the way you're used to doesn't mean it's not working.

> Indeed it is essentially impossible to archive something that's in "I'm
> Under Attack" mode, despite that usually being the exact moment where
> archival is necessary!

Preventing automated systems from making requests to your site when you're in
the middle of a DDoS seems sensible enough. If it's truly necessary (and
permitted), contact the site and ask for the IP of the backend. If your work
is appreciated, they'll give it to you. As a site operator, if you want to
archive my site, I'd rather you contact me. I'll give you my backend IP and
hell, might even give you rsync access or something. Archiving through a
browser is the least desirable way to have my stuff archived.

> In most of the Western world, connectivity is pretty good. You can go from
> most places in the US to Europe and back - across the ocean! - in about 140
> milliseconds. A commonly used metric in the web development industry is that
> your page and all your assets should be loaded in under 300 milliseconds.

I'm located in Silicon Valley and a ping to Germany takes 172ms, a ping to
Canada takes 90, a ping to Amsterdam takes 154 and so on. A ping to San Jose
where my nearest CloudFlare/Akamai/everything POP is located takes 14ms.

> Assuming you're declaring all the assets on your page directly, that would
> make it two roundtrips totalling about 280 milliseconds, since the assets
> can be retrieved in parallel.

This is incredibly optimistic. Open up the Network tab in Chrome's dev tools
and open Amazon, Facebook, even a WordPress blog sometime. Hell, HN's front
page barely loads that fast.

> CloudFlare can't cache the actual pageloads locally, because they are
> dynamic and different for everybody.

This depends entirely on the content of the page. Not all content is dynamic.
Blogs and news sites for example are largely static. Further, CloudFlare can
cache the static _parts_ of the page and send only the dynamic content:
[https://blog.cloudflare.com/cacheing-the-uncacheable-
cloudfl...](https://blog.cloudflare.com/cacheing-the-uncacheable-cloudflares-
railgun-73454/)

> So why not just use a CDN? Using a CDN means you can still optimize your
> asset loading, but you don't have to forward all your pageloads through
> CloudFlare. Static assets are much less sensitive, from a privacy
> perspective.

CloudFlare is a CDN. Why not use CloudFlare as a CDN for your static assets?
CloudFlare isn't making you turn it on for all your domains. You can totally
turn CloudFlare on for static.mysite.com and leave mysite.com on your own
server.

> And this is the problem with CloudFlare in general - you can't usually make
> things faster by routing connections through somewhere, because you're
> adding an extra location for the traffic to travel to, before reaching the
> origin server.

This is the same for every CDN and like every CDN, you're relying on the CDN's
internal network to get somewhere faster than your own would, and for the
CDN's cache to eliminate the need even to do the round trip. If the content is
already in Asia, the CDN doesn't need to make the request back to the origin
at all. That eliminates entire intercontinental round trips and that's
massive.

> Unfortunately, all of these issues together mean that CloudFlare is
> essentially breaking the open web. Extreme centralization, breaking the
> trust model of SSL/TLS, a misguided IP blocking strategy, requiring specific
> technologies like JavaScript to be able to access sites, and so on. None of
> this benefits anybody but CloudFlare and its partners.

No, you have opinions biased by (valid but not universal) philosophies and
concerns. These features are desired and beneficial to many people.

This is way too much text for me so I'll stop here.

TL;DR: This article mainly complains about things that are common to every
CDN, while demonising CloudFlare specifically for unknown reasons. The rest is
mainly complaints about Under Attack Mode.

~~~
ffwd
> Like any CDN, if the connection between the edge nodes and your own servers
> is not secure, a hostile ISP can do whatever it wants with it.

Unless I'm mistaken, most regular small/medium-sized users of a CDN will use a
'plug n play' type CDN, where the CDN just pulls from the origin server via
public HTTP, and in that scenario you can't really fake SSL if you didn't set
it up on your server, and your users won't believe that they are browsing
over HTTPS when on your site. Cloudflare changes this model and superficially
tells the user they're using HTTPS, even though the second leg, from
Cloudflare to the origin, is unencrypted. Even worse, as we can see here and
elsewhere, a lot of people explicitly sign up to Cloudflare for SSL! That most
likely means they didn't set up SSL on their server.

> I'll give you my backend IP and hell, might even give you rsync access or
> something. Archiving through a browser is the least desirable way to have my
> stuff archived.

Yeah, but this is the most optimistic view of it all. If you are at all
familiar with ArchiveTeam and others, the main method for archiving web sites
is through the public web site. For many reasons, site admins might not want
to give direct access to their server, so the simplest and most atomic path
is to simply crawl the web site in order to 'get everything' (all the pages),
as long as you don't flood the server with requests and such, which most don't
do.

> No, you have opinions biased by (valid but not universal) philosophies and
> concerns. These features are desired and beneficial to many people.

So you don't have any worries about Cloudflare and the centralization? What
about Tor users' right to privacy, and how the captchas are completely insane?
Cloudflare is unfortunately a huge pain in the ass and I'm not sure they can
be trusted. There's no proof they are connected to any governments as far as I
know, but they have now become this standard thing that everyone enables
because it's free, and the surveillance possibilities are _vast_ - even worse
than cookies/advertising IMO, because there is almost no way to circumvent it
as a normal end user.

~~~
Veratyr
> Cloudflare changes this model and superficially tells the user they're using
> https, but then on the second link to cloudflare, it's unencrypted. Even
> worse, as we can see here and elsewhere, a lot of people explicitly sign up
> to cloudflare for SSL! That means most likely they didn't set up ssl on
> their server.

This essentially pushes any MITM to CloudFlare's network, which is _usually_
better than the user's and so far has exactly one confirmed interception. This
is a valid concern and could certainly be better but I believe eliminating the
CloudFlare -> User vector from a potential attack is a good thing.

> Yeah but this is the most optimistic view of it all. If you are at all
> familiar with archiveteam and others, the main method for archiving web
> sites is through the public web site. For many reasons, site admins might
> not want to give access directly to their server, so the most atomic and
> simplest path is to simply crawl the web site, in order to 'get everything'
> (all the sites), as long as you don't flood the server with requests and
> such, which most don't do.

While I generally support archival efforts, making a large number of automated
HTTP requests (you're archiving the entire site after all) while I'm in the
middle of a DDoS is not appreciated, particularly if any of that content has
to come from a database (because you're accessing old stuff that isn't in my
site cache). This could make a barely tolerable DDoS completely take down my
origin.

> So you don't have any worries about Cloudflare and the centralization? What
> about tor users right to privacy and how the capchas are completely insane?
> Cloudflare is unfortunately a huge pain in the ass and I'm not sure they can
> be trusted. There's no proof they are connected to any governments as far as
> i know, but they have now become this standard thing that everyone enables
> because it's free, and the surveillance possibilities are _vast_, even worse
> than cookies/advertising IMO because there is almost no way to circumvent it
> as a normal end user

Like I said, the philosophies and concerns have some merit but they're not
universal. I have no issues with CloudFlare and "centralisation". If
CloudFlare is shown to commit some kind of wrongdoing there's absolutely
nothing stopping me from moving elsewhere.

------
daxorid
_In 2011, however, it was pretty much impossible to get working DDoS
mitigation for less than $100 a month_

I would have loved to know who provided this service so cheaply back then.
IIRC in 2011, Prolexic and BlackLotus were your only options starting at
$5k/mo, and you also had to be large enough to own an ASN because GRE was your
only option.

~~~
joepie91_
I forgot the name, but it was some reseller of Awknet. They offered DDoS-
mitigated hosting services on their own infrastructure. I think GigeNET also
had a slightly more expensive offering, but I'm not sure whether that was ever
publicly announced.

------
manigandham
This whole post is ridiculous and comes off as some personal attack without
much technical merit:

- Every big network company is at the mercy of government. Not sure what the
point is here... so we should ban all big companies? Everyone from the ISP to
the website host to the network equipment manufacturer can and might be
compromised.

- Every CDN today is a reverse proxy and MITM is what they do. That's just
how it fundamentally works. No magical way around this.

- CF supports websockets now in addition to HTTP(S) for every plan. If you
need more protocol support, then use a service specialized for that; CF
clearly states that they don't focus on mail or game servers.

- Who cares if they do mitigation? What I want is my origin to be protected,
that's it. Whether they soak it up with network capacity or have advanced
processes doesn't matter to me.

- Free plans are free, so they have every right to kick you off if you
consume too many resources and are getting DDoS'ed all the time. Pro plans
also get plenty of protection; you have to be seriously under attack to have
them contact you about it. And in that case, $200/month is probably one of the
cheapest options, considering most other hosts (like AWS) will be happy to
bill you like crazy or just can't even handle it.

\- The "under attack" option is supposed to pose problems, because you're
under attack. It's pretty clear that it's not the normal mode of operation.
Don't turn this on unless you really need it.

- Not sure what the issue is with having to whitelist bots with them. A
whitelist approach is far better than trying to maintain an infinite
blacklist. Also, they are more advanced than simple IP filters; that approach
stopped working a decade ago.

- Connectivity is not good, even in much of the western world, and varies
widely with location, device, capacity, etc. Latency is a real physical
limitation that can only be overcome by being closer to users. Try browsing a
site on another continent that's not using a CDN and see what happens. Also,
CF _is_ a CDN, so I'm not sure how "use a CDN" was an answer to this.

The only real criticism is their Flexible SSL option, which doesn't encrypt
the connection to the origin; this has been debated endlessly. I think their
recent announcement of free origin certs is a way to improve this, but
ultimately it's a potential security risk and up to the website operator to
understand.

We use CF because they provide DNS, CDN, SSL, free bandwidth, DDOS protection
and better features than others for a single flat price. It works really well
for us but it's about understanding how it really works and the trade-offs. If
this doesn't work for _you_ and your security or business needs, then use
something else.

~~~
joepie91_
> \- Every big network company is at the mercy of government. Not sure what
> the point is here... so we should ban all big companies? Everyone from the
> ISP to the website host to the network equipment manufacturer can and might
> be compromised.

Covered in the article.

> \- Every CDN today is a reverse proxy and MITM is what they do. That's just
> how it fundamentally works. No magical way around this.

Nope. Covered in the article.

> \- CF supports websockets now in addition to HTTP(S) for every plan. If you
> need more protocol support, then use a service specialized for that; CF
> clearly states that they don't focus on mail or game servers.

How does them stating this make it not a problem?

> \- Who cares how they do mitigation? What I want is for my origin to be
> protected, that's it. Whether they soak it up with network capacity or have
> advanced processes doesn't matter to me.

But your origin _isn't_ protected, that's the point. Only _their_ servers
are.

> \- Free plans are free, so they have every right to kick you off if you
> consume too many resources and are getting DDOS'ed all the time. Pro plans
> also get plenty of protection; you have to be seriously under attack to have
> them contact you about it. And in that case, $200/month is probably one of
> the cheapest options, considering most other hosts (like AWS) will be happy
> to bill you like crazy or just can't handle it at all.

You're comparing to mitigation-less providers. Compare to providers that offer
mitigation instead. Apples and oranges.

> \- The "under attack" option is supposed to pose problems, because you're
> under attack. It's pretty clear that it's not the normal mode of operation.
> Don't turn this on unless you really need it.

It only poses problems for legitimate users, not the attacker(s). Covered in
the article.

> \- Not sure what the issue is with having to whitelist bots with them. A
> whitelist approach is far better than trying to maintain an infinite
> blacklist. Also, they are more advanced than simple IP filters; that
> approach stopped working a decade ago.

Covered in the article.

> \- Connectivity is not good, even in much of the western world, and varies
> widely between location, device, capacity, etc. Latency is a real physical
> limitation that can only be overcome by being closer to users. Try browsing
> a site in another continent that's not using a CDN and see what happens.
> Also CF is a CDN, not sure how "use a CDN" was an answer to this.

And CloudFlare doesn't actually make this better. Covered in the article. And
no, CloudFlare is not a CDN - it's an Anycast proxy.

> The only real criticism is their Flexible SSL option, which doesn't encrypt
> the connection to the origin; this has been debated endlessly. I think their
> recent announcement of free origin certs is a way to improve this, but
> ultimately it's a potential security risk that's up to the website operator
> to understand.

Still doesn't solve the problem, as covered in the article.

\---

Did you actually _read_ the article, or just skim it?

~~~
manigandham
I read your article and replied to each major section. Nothing is "covered" as
I've clearly stated the issues.

It seems like you fundamentally don't understand what a CDN is, how it works,
how latency affects website performance, and have a strange idea of
"mitigation" when in actuality most DDOS protection works exactly the same
way. There's no difference between Fastly, CloudFront, MaxCDN or other
companies doing the exact same thing, except that CloudFlare has a few unique
features and you don't like them.

Here's a test: show me _exactly_ how using Fastly in front of my webapp is
different than using CloudFlare?

~~~
joepie91_
> I read your article and replied to each major section. Nothing is "covered"
> as I've clearly stated the issues.

I'll even quote the relevant sections for you.

> \- Every big network company is at the mercy of government. Not sure what
> the point is here... so we should ban all big companies? Everyone from the
> ISP to the website host to the network equipment manufacturer can and might
> be compromised.

"And unlike every other backbone provider and mitigation provider, they can
read your traffic in plaintext, TLS or not."

(Addendum: Compromising a server is much harder to do at dragnet scale than
MITMing.)

> \- Every CDN today is a reverse proxy and MITM is what they do. That's just
> how it fundamentally works. No magical way around this.

"Using a CDN means you can still optimize your asset loading, but you don't
have to forward all your pageloads through CloudFlare. Static assets are much
less sensitive, from a privacy perspective."

> \- The "under attack" option is supposed to pose problems, because you're
> under attack. It's pretty clear that it's not the normal mode of operation.
> Don't turn this on unless you really need it.

"Oh, and about that "I'm Under Attack" mode that you get on the Free plan as
well? Yeah, well, it doesn't work. But don't take my word for it - here's
proof. That code will solve the 'challenge' that it presents to your browser,
in a matter of milliseconds. Any attacker can trivially do this. And the
challenge can't be made more difficult, because it would make it prohibitively
expensive for mobile and embedded devices to use anything hosted at
CloudFlare.

But while it doesn't stop attackers, it does stop legitimate users.

[...]

Some might argue that these kinds of archival bots are precisely what
CloudFlare is meant to protect against, but that's not really true. If that
were the case, why would there be an offer to add ArchiveBot to the whitelist
to begin with? Why would the Wayback Machine be on that very same whitelist?"
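The linked proof is not reproduced here, but the underlying point is easy to
sketch: an interstitial that asks the browser to evaluate an obfuscated
arithmetic expression is exactly as easy for a bot to evaluate. A toy
illustration (the challenge format and `solve_challenge` helper below are
invented for this sketch, not CloudFlare's actual scheme):

```python
import ast
import operator

# Toy "interstitial challenge": evaluate an arithmetic expression and
# add the length of the protected hostname. This format is invented
# for illustration; it is NOT CloudFlare's actual challenge scheme.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_expr(node):
    """Evaluate a parsed arithmetic expression without exec/eval."""
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](eval_expr(node.left), eval_expr(node.right))
    if isinstance(node, ast.Constant):
        return node.value
    raise ValueError("unsupported expression")

def solve_challenge(expression, hostname):
    # A real solver first deobfuscates the JS; the arithmetic itself
    # takes microseconds either way, for a browser or for a bot.
    return eval_expr(ast.parse(expression, mode="eval").body) + len(hostname)

print(solve_challenge("(3 * 7) + 2 - 9", "example.com"))  # -> 25
```

Making the arithmetic harder only penalizes slow legitimate devices; an
automated client still solves it in microseconds.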

> \- Not sure what the issue is with having to whitelist bots with them. A
> whitelist approach is far better than trying to maintain an infinite
> blacklist. Also, they are more advanced than simple IP filters; that
> approach stopped working a decade ago.

"I've been told that ArchiveBot can be added to the internal whitelist that
CloudFlare has, but this completely misses the point. Why do I or anybody else
need to talk to a centralized gatekeeper to be able to access content on the
web, especially if there might be any number of such gatekeepers? This kind of
approach defeats the very point of the web and how it was designed!

"And for a volunteer-run organization like ArchiveTeam, it's far more tricky to
implement support for these "challenge schemes" than it is for a botnet
operator, who stands to profit from it. That problem only becomes worse as
more services start implementing these kinds of schemes, and often it takes a
while for people to notice that their requests are being blocked - sometimes
losing important information in the process."

> \- Connectivity is not good, even in much of the western world, and varies
> widely between location, device, capacity, etc. Latency is a real physical
> limitation that can only be overcome by being closer to users. Try browsing
> a site in another continent that's not using a CDN and see what happens.
> Also CF is a CDN, not sure how "use a CDN" was an answer to this.

"But perhaps you're also targeting users in regions with historically poor
connectivity, such as large parts of Asia. Well, turns out that it doesn't
really work there either - CloudFlare customers routinely report performance
problems in these regions that are worse than they were before they switched
to CloudFlare.

This is not really surprising, given the mess of peering agreements in Asia;
using CloudFlare just means you're adding an additional hop to go through,
which increases the risk of ending up on a strange and slow route.

And this is the problem with CloudFlare in general - you can't usually make
things faster by routing connections through somewhere, because you're adding
an extra location for the traffic to travel to before reaching the origin
server. There are some cases where these kinds of techniques can make a real
difference, but they are so rare that it's unreasonable to build a business
model on it. Yet, that's precisely what CloudFlare has done."
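A back-of-the-envelope calculation makes the extra-hop point concrete. Light
in fibre covers roughly 200 km per millisecond, which puts a hard lower bound
on round-trip time for any path; the distances below are invented purely for
illustration:

```python
# Lower-bound RTT from path length. Light in fibre covers roughly
# 200 km per millisecond; real routes are longer and add queueing
# delay, so these are best-case figures. Distances are invented.
FIBRE_KM_PER_MS = 200.0

def rtt_ms(path_km):
    return 2 * path_km / FIBRE_KM_PER_MS  # there and back

direct = rtt_ms(1900)                  # user straight to origin
via_pop = rtt_ms(300) + rtt_ms(2400)   # user -> off-path PoP -> origin

print(f"direct: {direct:.0f} ms, via PoP: {via_pop:.0f} ms")  # 19 vs 27
```

For cached static assets, the short user-to-PoP leg is all a user pays; for
dynamic pageloads that must reach the origin, the detour is paid in full,
which is the scenario the quoted paragraph describes.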

> The only real criticism is their Flexible SSL option, which doesn't encrypt
> the connection to the origin; this has been debated endlessly. I think their
> recent announcement of free origin certs is a way to improve this, but
> ultimately it's a potential security risk that's up to the website operator
> to understand.

"But let's pretend that CloudFlare realizes that Flexible SSL was a mistake,
and removes the option. They'd then require TLS between CloudFlare servers and
the origin server as well. While this solves the specific problem of other
ISPs meddling with the connection, it leaves a bigger problem unsolved: the
fact that CloudFlare itself acts as an MITM (man-in-the-middle). By the very
definition of how their system works, they must decrypt and then re-encrypt
all traffic, meaning they will always be able to see all the traffic on your
site, no matter what you do."
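The decrypt-and-re-encrypt flow can be sketched with a toy stand-in for TLS (a
single-byte XOR, used here only to show the data flow, not as real
cryptography): a terminating proxy holds a key for each leg, so the plaintext
necessarily exists in its memory in between.

```python
# Toy stand-in for TLS (single-byte XOR), used ONLY to illustrate the
# data flow of a terminating proxy: it decrypts the client leg, has
# full plaintext visibility, then re-encrypts toward the origin.
def xor(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

CLIENT_KEY, ORIGIN_KEY = 0x42, 0x37  # hypothetical per-leg session keys

def terminating_proxy(ciphertext_from_client: bytes) -> bytes:
    plaintext = xor(ciphertext_from_client, CLIENT_KEY)  # decrypt client leg
    print("proxy sees:", plaintext.decode())             # full visibility here
    return xor(plaintext, ORIGIN_KEY)                    # re-encrypt origin leg

request = xor(b"GET /account HTTP/1.1", CLIENT_KEY)      # client "encrypts"
to_origin = terminating_proxy(request)
assert xor(to_origin, ORIGIN_KEY) == b"GET /account HTTP/1.1"
```

No configuration option changes this; it follows from the proxy terminating
both connections.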

\--

So yes, it's all covered in the article. If you believe that something isn't
fully addressed, or it somehow isn't accurate, or you don't understand how it
relates to that - then _ask concrete questions_. Don't just throw your hands
up in the air going "BUT IT DOESN'T COVER THAT!", when it clearly has.

> It seems like you fundamentally don't understand what a CDN is, how it
> works, how latency affects website performance, and have a strange idea of
> "mitigation" when in actuality most DDOS protection works exactly the same
> way.

No, it doesn't. From the article:

"Traditional DDoS mitigation services work by analyzing the packets coming in,
spotting unusual patterns, and (temporarily) blocking the origin of that
traffic. They never need to know what the traffic contains, they only need to
care about the patterns in which it is received. This means that you can
tunnel TLS-encrypted traffic through a DDoS mitigation service just fine,
without the mitigation service ever seeing the plaintext traffic... and you're
still protected."
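As a minimal sketch of that approach, here is a rate-based filter that decides
purely on (source, timestamp) metadata and never touches the payload, which is
why it works unchanged on TLS-tunnelled traffic. All thresholds are invented
for illustration:

```python
import time
from collections import defaultdict, deque

class RateFilter:
    """Temporarily block sources exceeding a packet-rate threshold.

    Decisions use only (source, timestamp) metadata; the payload is
    never inspected, so encrypted traffic can be filtered unchanged.
    """
    def __init__(self, max_packets=100, window_s=1.0, block_s=60.0):
        self.max_packets = max_packets
        self.window_s = window_s
        self.block_s = block_s
        self.arrivals = defaultdict(deque)   # source -> recent timestamps
        self.blocked_until = {}              # source -> unblock time

    def allow(self, source, now=None):
        now = time.monotonic() if now is None else now
        if self.blocked_until.get(source, 0) > now:
            return False
        q = self.arrivals[source]
        q.append(now)
        while q and q[0] < now - self.window_s:  # drop stale timestamps
            q.popleft()
        if len(q) > self.max_packets:
            self.blocked_until[source] = now + self.block_s  # temporary block
            return False
        return True

f = RateFilter(max_packets=3, window_s=1.0)
print([f.allow("10.0.0.1", now=t) for t in (0.0, 0.1, 0.2, 0.3, 0.4)])
# -> [True, True, True, False, False]
```

Real mitigation appliances are far more sophisticated, but the key property is
the same: nothing in this logic requires reading the traffic's contents.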

> There's no difference between Fastly, CloudFront, MaxCDN or other companies
> doing the exact same thing, except that CloudFlare has a few unique features
> and you don't like them.

Again, straight from the article:

"While there are some newer providers that offer similar services to
CloudFlare - and I consider them bad on exactly the same grounds - they run on
a much smaller scale, and have much less impact."

> Here's a test: show me exactly how using Fastly in front of my webapp is
> different than using CloudFlare?

When did I ever claim it was? If it works the same, it's prone to the same
issues. This is a complete strawman.

------
iRobbery
Nice read, I couldn't agree more, though a paragraph about their lack of
responsibility when they proxy malicious crap via their network would have
been appropriate too. So often have I sent complaints where they are part of
serving malicious content, and they send you some copy-paste reply of 'we are
not hosting the content' while they could easily do something about it, for
example by simply no longer proxying that crap.

And their HN posts annoy me too (now that is just my problem), but for some
reason they almost always seem to get posted twice.

------
SShrike
Looks like I'll have to actually get around to properly setting up SSL on my
website after reading this; I only used CloudFlare because I was lazy.

------
sebcat
The plaintext TLS part reminded me of the "SSL added and removed here :v)"
slide regarding Google's infrastructure.

------
james33
I have to disagree on the CDN point made. We did extensive benchmarks several
months ago and found CloudFlare to be the fastest or nearly the fastest in
every metric (and it is saving us hundreds of dollars per month). It works
fantastically well as a low-cost CDN, and yes, CDNs have a lot of value to a
lot of sites.

[http://goldfirestudios.com/blog/142/Benchmarking-Top-CDN-
Pro...](http://goldfirestudios.com/blog/142/Benchmarking-Top-CDN-Providers)

------
lllorddino
I use Cloudflare because I host my static website on GitHub. The flexible SSL
mode is good enough because there's no user data being passed around, only
articles of mine.

I've used the full SSL mode on self-hosted servers and can't see what the
dilemma is, besides you being paranoid that Cloudflare will tamper with data
passing through them. Evidence?

~~~
lorenzhs
GitHub Pages does support SSL now, so you can use "Full" SSL mode ( _not_
"Full (strict)") with it. We do this for glowing-bear.org, which is just a
bunch of static files too.

~~~
joepie91_
Then why would you use CloudFlare at all? You already have TLS.

~~~
lorenzhs
We want to use a custom domain, but TLS with custom domains isn't possible
with github pages. [https://glowing-bear.github.io/glowing-
bear/](https://glowing-bear.github.io/glowing-bear/) isn't exactly nice to
type.

------
aftbit
Where can I get bandwidth for $0.35/TB?

~~~
joepie91_
Hurricane Electric: [https://he.net/](https://he.net/)

Seems they're actually down to $0.32/TB now. At least, the price per TB has
historically mirrored the per-Mbps price (you have to pay for both,
separately), so I'm assuming that it's $0.32/TB now as well.

Some VPS providers - I can't immediately recall which - will also charge about
$0.50/TB without having to pay per Mbps at all. That's usually a mix of HE and
Cogent.
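For context on the units: per-Mbps and per-TB prices are directly comparable,
because a 1 Mbps commit sustained over a 30-day month moves about a third of a
terabyte. A quick sanity check:

```python
# Data moved by a 1 Mbps commit, fully utilised for a 30-day month.
seconds_per_month = 30 * 24 * 3600        # 2,592,000 s
bits = 1_000_000 * seconds_per_month      # at a constant 1 Mbps
tb_per_month = bits / 8 / 1e12            # bits -> bytes -> terabytes
print(f"1 Mbps sustained = {tb_per_month:.3f} TB/month")  # 0.324 TB
```

So the two prices being of similar magnitude is no coincidence; in practice
utilisation is bursty, which is why providers bill the commit and the transfer
separately.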

------
mschuster91
The advantage of Cloudflare is, indeed, not protection from bots speaking
HTTP.

Its main advantage is that it protects you as a site operator from SYN floods,
traffic reflection attacks (ohai NTP, DNSoverUDP) and similar attacks. Oh, and
it also protects your server from idiots doing portscans.
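Reflection attacks work because a small spoofed request elicits a much larger
response aimed at the victim; the bandwidth multiplier is just the size ratio.
A back-of-envelope sketch with invented sizes:

```python
# Amplification factor = response size / request size. The sizes below
# are invented orders of magnitude, not measurements of any protocol.
def amplification(request_bytes, response_bytes):
    return response_bytes / request_bytes

factor = amplification(60, 3000)  # small spoofed query, large reply
print(f"{factor:.0f}x: 1 Gbps of spoofed queries becomes "
      f"~{factor:.0f} Gbps at the victim")
```

Absorbing that multiplied traffic is exactly where a large fronting network's
capacity matters.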

~~~
tremon
_it also protects your server from idiots doing portscans._

Why does your server need protection from that?

------
mgalka
Completely agree. It actually made my site slower and killed many of the
speed optimizations I had implemented.

------
jguegant
I've almost never used Cloudflare, but I really appreciate their blog; it's a
very effective advertisement.

------
0xmohit
Cloudflare claims to "make web properties faster and safer".

Such statements are amusing at best. Seems analogous to an advertisement of a
chocolate drink claiming to turn morons into Einsteins.

