Cloudflare Reverse Proxies Are Dumping Uninitialized Memory (chromium.org)
3238 points by tptacek on Feb 23, 2017 | 992 comments



Oh, my god.

Read the whole event log.

If you were behind Cloudflare and it was proxying sensitive data (the contents of HTTP POSTs, &c), they've potentially been spraying it into caches all across the Internet; it was so bad that Tavis found it by accident just looking through Google search results.

The crazy thing here is that the Project Zero people were joking last night about a disclosure that was going to keep everyone at work late today. And, this morning, Google announced the SHA-1 collision, which everyone (including the insiders who leaked that the SHA-1 collision was coming) thought was the big announcement.

Nope. A SHA-1 collision, it turns out, is the minor security news of the day.

This is approximately as bad as it ever gets. A significant number of companies probably need to compose customer notifications; it's, at this point, very difficult to rule out unauthorized disclosure of anything that traversed Cloudflare.


In case you're wondering how this could be worse than Heartbleed:

Yes, apparently the allocation patterns inside Cloudflare mean TLS keys aren't exposed to this vulnerability.

But Heartbleed happened at the TLS layer. To get secrets from Heartbleed, you had to make a particular TLS request that nobody normally makes.

Cloudbleed is a bug in Cloudflare's HTML parser, and the secrets it discloses are mixed in with, apparently, HTTP response data. The modern web is designed to cache HTTP responses aggressively, so whatever secrets Cloudflare revealed could be saved in random caches indefinitely.

You really want to see Cloudflare spend more time discussing how they've quantified the leak here.


> You really want to see Cloudflare spend more time discussing how they've quantified the leak here.

What would you like to see? The SAFE_CHAR logging allowed us to get data on the rate, which is how I got the percentage-of-requests figure.


How many different sites? Your team sent a list to Tavis's team. How many entries were on the list?


We identified 3,438 unique domains. I'm not sure if those were all sent to Tavis because we were only sending him things that we wanted purged.


3438 domains which someone could have queried, but data from any site that had "recently" passed through Cloudflare could potentially be exposed in the responses, right? Purging those results helps with search engines, but a hypothetical malicious secret crawler would still potentially have data from any site.


It doesn't have to be a secret crawler. Just one that wasn't contacted by cloudflare (I didn't see any non-US search providers mentioned).


In other words, Baidu are currently sitting on a treasure trove of keys and passwords.


Possibly not, Baidu and CloudFlare have a well-documented long-term partnership.


Maybe there's much more to worry about in Baidu's particular not-so-well-documented but longer-term partnerships.


Oh, absolutely. Baidu's relationship with their host nation should be a source of concern for us all. I've heard some interesting and unusual stories.

But they're probably aware of this issue and know enough to go looking to purge their caches.


Or Baidu know enough to not purge their caches. Think of the amount of tangible gratitude that their host nation would show them for access to some potentially tasty information....


Swap baidu for google or microsoft in that sentence and it still has the same problems. Every government 3 letter agency has a vested interest in the secrets.


Whether you believe it or not, there is actually a tangible difference between the relationships US corporations have with the USG vs other nations and their corporate entities.


They're not all 3 letters. (e.g. GCHQ, ASIO, CSIS, DGSI, etc.)


It's an expression


Well, purge their public cache, after taking a private dump and supplying it to those who would find value in such a thing.


>"I've heard some interesting and unusual stories."

Do you care to share or elaborate on this?


They're not my stories to share, I'm afraid.


+Yandex


I wonder if archive.org or archive.is have anything cached...


archive.is was red, meaning it uses Cloudflare....

www.doesitusecloudflare.com


The concern isn't that they use Cloudflare. The concern is that they're spidering the Internet, and therefore might be storing cached data that Cloudflare leaked.


while the internet archive / wayback machine do spider, I think archive.is only archives a site "on demand"


Yes but with all the people and even automated 3rd-party scripts making use of archive.is, it is practically a spider.


No TLS on this site?


correct


Have you asked them for an eta on your shirt?


You know a company isn't serious about security when their top security bounty is a t-shirt. Instagram has a better policy, for God's sake.


Instagram has been part of Facebook for over four years, so they are covered by the Facebook Bug Bounty: https://www.facebook.com/whitehat


I'd love to see some evidence that big bounties correspond to more exploits being found. In my experience, they tend to result in an increasing amount of crap for your security team to sort through.


Plenty of companies that are serious about security don't do bounties. They're a real pain to administer, apparently.


I'd expect a company that can MITM a good chunk of the Internet to incur that pain in exchange for all the money customers pay them.


fuck :(


Indeed, this is the point in the comment thread where you get the feeling the internet is broken.


What I'm wondering: how many fuckups like this need to happen for website owners to realize that uber-centralization of vital online infrastructure is a bad idea?

But I guess there is really no incentive for anyone in particular to do anything about this, because it provides a kind of perverted safety in numbers. "It's not just our website that had this issue, it's, like, everyone's shared problem." The same principle applies to uber-hosting providers like AWS and Azure, as well as those creepy worldwide CDNs.

Interestingly, it seems this is one of the cases where using a smaller provider with the same issue would really make you better off (relatively speaking) because there would be fewer servers leaking your data.


Offer a way to fix DDoS attacks as cheaply as Cloudflare does and people will move away. It's a big problem, and the general consensus is, "just use Cloudflare to fix your DDoS problem!"


You might as well scrap http entirely, with or without the "s".

The web simply doesn't scale. The only way to fix DDoS reliably is peer-to-peer protocols. Which hardly ever happens because our moronic ISPs believed nobody needed upload. Or even a public IP address.


As someone who has been involved in a number of moronic ISP designs, operations, and build-outs: asymmetric access networks are designed that way due to actual traffic patterns and physical-medium constraints.

you can argue "if everything was symmetric, then traffic patterns would be different" and you might be right, but that's not how the market went or how the "internet" started.

the client-server paradigm drove traffic patterns, and there was never any market demand or advantage by ignoring it.


That's not how the market went because the market is often moronic. Case in point: QWERTY. (Why QWERTY is actually the best layout ever is left as an exercise to the occasional extremist libertarian)

Yes, traffic patterns at the time were heavily slanted towards downloads. I know about copper wires and how download and upload limit each other. Still, setting that situation in stone was very limiting. It's a self-fulfilling prophecy.

You don't want to host your server at home because you don't have upload. The ISP sees nobody has servers at home so they conclude nobody needs upload. Peer-to-peer file sharing and distribution is slower than YouTube because nobody has any upload. Therefore everybody uses YouTube, and the ISP concludes nobody uses peer-to-peer distribution networks.

And so on and so forth. It's the same trend that effectively forbade people from sending e-mail from home (they have to ask a big-shot provider such as Gmail to do it for them, with MITM spying and advertisement), or the rise of ISP-level NAT, instead of giving everyone a public IPv6 address like they all deserve (including on mobile).

There is a point where you have to realise the internet is increasingly centralised at every level because powerful special interests want it to be that way.

Regulation is what we need. Net neutrality is a start. Next in line should be mandated symmetric bandwidth, no ISP-wide firewall (the local router can have safe default settings), a public IP (v4 or v6) for everyone, and no restriction on usage patterns (the ISP should not be allowed to forbid servers). Ultimately, our freedom of expression and freedom of information depend on this. They are messing with human rights.


> Peer-to-peer file sharing and distribution is slower than YouTube because nobody has any upload.

And because IP multicast doesn't work over the internet. If it did, even if merely to some limited extent, some asymmetries would be far easier to stomach.


> you can argue "if everything was symmetric, then traffic patterns would be different" and you might be right, but that's not how the market went or how the "internet" started.

It may not have been how the market went but it definitely was how the internet got started.


You say this as I look at my positively anemic upstream that makes browsing even simple Nagios pages painfully slow, and my ISP that doesn't offer anything substantively better without a massive increase in monthly costs.

The traffic patterns for higher upstream aren't there because they can't be there.


Decentralisation doesn't do a whole lot better. Just think about MTA or DNS vulnerabilities, for a start.


Or look at how many websites are still vulnerable to Heartbleed.


The Internet will remain periodically broken until we put a cost metric on the breaking (and working) times.


Which means any user who has used any service which uses CloudFlare, right? At least in theory.


How can I find out which services I have accounts with are using Cloudflare? Or better, have been using Cloudflare in recent months? Assume I have a list of domains where I have accounts.


We're compiling a list of affected domains using several scrapers here:

https://github.com/pirate/sites-using-cloudflare


I ranked your list of Cloudflare-using domains by their Alexa rank.

Sharing here in case anyone else finds it useful

(warning - it's 1.1MiB gzipped / 2.4MiB uncompressed)

https://polarisedlight.com/tmp/cf_ranked.txt

any domains outside the top 1 million are omitted


Hacked this together to determine which ones out of the list are potentially using cloudflare reverse proxies. You could also send an HTTP request to them and look for the cloudflare-nginx Server header.

https://gist.github.com/dustyfresh/4d8d364ca4c6da465cfc7d817...
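If you just want to spot-check a handful of domains by hand, here's a minimal sketch of that Server-header check using libcurl (the HEAD-only request and the "cloudflare" substring match are my assumptions, not the gist's code):

    /* cc check_cf.c -lcurl */
    #include <curl/curl.h>
    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    /* Called once per response header line, e.g. "Server: cloudflare-nginx\r\n" */
    static size_t on_header(char *buf, size_t size, size_t nitems, void *userdata)
    {
        size_t len = size * nitems;
        size_t n = len < 255 ? len : 255;
        char line[256];
        int *is_cf = userdata;

        memcpy(line, buf, n);
        line[n] = '\0';
        if (strncasecmp(line, "Server:", 7) == 0 && strstr(line, "cloudflare"))
            *is_cf = 1;
        return len;   /* must return the full length or curl aborts the transfer */
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s https://example.com\n", argv[0]);
            return 2;
        }
        CURL *curl = curl_easy_init();
        if (!curl) return 2;

        int is_cf = 0;
        curl_easy_setopt(curl, CURLOPT_URL, argv[1]);
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);            /* HEAD request */
        curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
        curl_easy_setopt(curl, CURLOPT_HEADERDATA, &is_cf);
        CURLcode rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if (rc != CURLE_OK) {
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));
            return 2;
        }
        printf("%s: %s\n", argv[1],
               is_cf ? "Cloudflare Server header seen" : "no Cloudflare Server header");
        return is_cf ? 0 : 1;
    }

Keep in mind a missing header only means that hostname isn't proxied; as noted elsewhere in the thread, a site can still be affected if any of its traffic passed through Cloudflare's proxies.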


You can check IP whois records, but it'll be very hard to be 100% sure about any of them. For example, one of the examples from the bug report is Uber, which doesn't use Cloudflare for its home page but apparently does for one of its internal API endpoints.


There is a Chrome extension named "claire"[1] which tells you if a site uses CloudFlare or not, but I'm not sure about other browsers (FF or others).

[1]: https://chrome.google.com/webstore/detail/claire/fgbpcgddpmj...


For Firefox, I just made this: https://github.com/traktofon/cf-detect


At this point, I would just start rolling everything. (And I have.)


[edit: correction]


No. 3438 domains were configured to expose this, and were potentially queried and logged by a far greater number of people. And still other data (anything passing through Cloudflare in recent months) could be exposed.

Potentially huge amounts of stuff might be exposed, but I have some assurances that "the practical impact is low" from someone I trust, so I think it's just a lot of random data. I'd still rotate all credentials which passed through Cloudflare in the past N months (and if I were a big consumer site NOT on Cloudflare, I might change end user passwords anyway, due to re-use), but I don't think it will be the end of the world.


It may seem like a nightmare Internet data security scenario, but it looks like Tavis is going to get a free t-shirt out of the deal, so let's just call it a wash.


What anomalies would be apparent in your logs if someone malicious had discovered this flaw and used it to generate a large corpus of leaked HTTP content?


That's also what I'm interested in. There's a lot of talk about the sites that had the features enabled that allowed the data to escape, but it's the sites that were co-existing with those that were in danger.

In terms of the caching, knowing the broken sites tells you where to look in the caches after the fact, but do you have any idea whose data was leaked? Presumably 2 consecutive requests to the same malformed page could/would leak different data.


> Presumably 2 consecutive requests to the same malformed page could/would leak different data.

Wouldn't the second request be served from the CDN cache? Since for Cloudflare that particular page is a valid cached page, it would send you that same page on the second request.


Only if the leaked memory is in the response before the response is cached.


I don't know enough about the layers in the cloudflare system to say. Does it only apply to cached pages? What about https? They would have the ssl termination first and then these errant servers behind that - none of those pages would be cached, right?


Cloudflare doesn't cache HTML pages by default.


it seems to me you'd have to know at a minimum:

1. every tag pattern that triggers the bug(s)

2. which broken pages with that pattern were requested at an abnormally high frequency or had an unusually short TTL (or some other useful heuristic)

3. on which servers, and at what time, in order to tell

4. whose data lived on the same servers at the same time as those broken pages

to even begin to estimate the scope of the leak. and that doesn't even help you find who planted the bad seeds.


Here's a question your blog post doesn't answer but should, right now:

Exactly which search engines and cache providers did you work with to scrub leaked data?


Also, have you worked with any search engine to notify affected customers?

ex: Right now there is an easily found Google cached page with OAuth tokens for a very popular fitness wearable's Android API endpoints.


Are you guys planning to release the list so we can all change our passwords on affected services? Or are you planning on letting those services handle the communication?


That list contains domains where the bug was triggered. The information exposed through the bug though can be from any domain that uses Cloudflare.

So: all services that have one or more domains served through Cloudflare may be affected.

The consensus seems to be that no one discovered this before now, and no bad guys have been scraping this leak for valuable data (passwords, OAuth tokens, PII, other secrets). But the data was still saved all over the world in web caches. So the bad guys are now probably after those. Though I don't know how much 'useful' data they would be able to extract, and what the risks for an average internet user are.


> The consensus seems to be that no one discovered this before now, and no bad guys have been scraping this leak for valuable data (passwords, OAuth tokens, PII, other secrets).

This is literally as bad as it gets; anyone trying to palliate the situation has something to sell you. You'd have to be an idiot to think that $organization (public, private, or shadow) doesn't have automated systems to check for something as stupid simple as this by querying resources at random intervals and searching for artifacts.

Someone found it. Probably more than one someone. Denial won't help.


Ah, gotcha. Thanks for explaining!


I and 4 other people I know all happened to get our reddit accounts temporarily locked due to a "possible compromise" in the past week or so, which has never happened to any of us before. Anyone else?


That would be unrelated to this. We haven't taken any action on any accounts because of this issue and have no plans to, as we (reddit.com) were unaffected.


Happened to me as well. If it's not related to CloudBleed, can you tell us specifically what happened? It's making me not trust Reddit.


If anything, it should make you trust reddit more! I don't know the exact details as to why your account may have been locked, but generally it will be because we're being proactive and have some signal that your account is using a weak or reused password.


Why was reddit on the list of affected sites, and how do you know reddit wasn't affected?


My reddit password failed a week ago, and I had to do an email reset. And I use a password manager.


In that case I'm even more inclined to think it might be because of Cloudbleed.


I've compiled a list of 7,385,121 domains that use Cloudflare here: https://github.com/pirate/sites-using-cloudflare


This list is misguided. It's just a dump of sites using Cloudflare's DNS, a hugely popular and (mostly) free service. The vulnerability only affected customers using Cloudflare's paid SSL proxy (CDN) service. The latter is a much smaller subset. Even then, only a subset of the SSL proxy users, those with certain options enabled that caused traffic to go through a vulnerable parser, were really impacted. I'm not sure a list as broad as this is helpful.


At least some of this is incorrect. The issue is NOT the pages running through the parser — the issue is the traffic running through the same nginx instance as vulnerable pages.


You are right in that other sites are affected but only the sites running through the parser would have leaked content in their cached pages.


This is not correct in my understanding: The sites with certain options enabled produced the erroneous behavior, but the data that would get leaked through this behavior could be from any site that uses Cloudflare SSL (as this requires Cloudflare to tunnel SSL traffic through their servers, decrypt it and re-encrypt it with their wildcard certificate). So if I understand correctly anyone using the (free) Cloudflare SSL service in combination with their DNS is affected.


I was wrong about the nature of the proxy issue, but right about DNS-only customers. Customers using only the free DNS service were not impacted by this at all, because traffic never flowed through the proxies.


Ah yes, sure if you only use DNS then your data never touches a CloudFlare server. Lucky you ;)


(whoops forgot to remove dupes, it's only 4,287,625) https://github.com/pirate/sites-using-cloudflare/raw/master/...


If I'm understanding correctly, that list would include not only the 3,438 domains with content that triggered the bug, but every Cloudflare customer between 2016-09-22 and 2017-02-18.


Can we trust it was only those domains?


Not really. If a site is using Cloudflare protection for only some of their subdomains they do not show on this list even if the site itself is in the alexa top 10k sites.

And of course all other sites that are not in alexa 10k are not in this list (if they are not on some other lists used, you can see the source of lists in the README of the Github repo).


No. Only Cloudflare customers using a subset of features of the SSL proxy service are impacted.

Cloudflare has a lot of customers who only use the free DNS service, for example.


Careful. It appears that any Cloudflare client who was sending HTTP/S traffic through their proxies is affected. A small subset of their customers had the specific problem that triggered the bug, but once triggered, the bug disclosed secrets from all their web customers.

You're not exposed if you never sent traffic through their proxies; for instance, if you somehow only used them for DNS.


I suspect there are a large number of Cloudflare customers that only use their DNS. I have a couple of domains in this category.

The DNS service is essentially free. It's an upgrade from most registrars' built-in DNS. It's a pretty robust solution, really -- global footprint, DNSSEC, fully working IPv6, etc.

My point is, the actual number of impacted customers was much smaller than the entire set of Cloudflare customers. There are lists in this thread that still reference hundreds of thousands (millions?) of sites, and that's just wrong.

(I agree on your first point though; I was confused about the nature of the proxy bug at first).


What I find remarkable is that the owners of those sites weren't ever aware of this issue. If customers were receiving random chunks of raw nginx memory embedded in pages on my site, I'd probably have heard about it from someone sooner, surely?

I guess there is a long tail of pages on the internet whose primary purpose is to be crawled by google and serve as search landing pages - but again, if I had a bug in the HTML in one of my SEO pages that caused googlebot to see it as full of nonsense, I'd see that in my analytics because a page full of uninitialized nginx memory is not going to be an effective pagerank booster.


Perhaps as a follow up to this bug, you can write a temporary rule to log the domain of any http responses with malformed HTML that would have triggered a memory leak. That way you can patch the bug immediately, and observe future traffic to find the domains that were most likely affected by the bug when it was running.

Or is the problem that one domain can trigger the memory leak, and another (unpredictable) domain is the "victim" that has its data dumped from memory?


I believe that's the real issue. Any data from any Cloudflare site may have been leaked. Those domains allow Google etc. to know which pages in their cache may contain leaked info; unfortunately the info itself could be from any request that's travelled through Cloudflare's servers.


Yes, the victim can be a different site. Cloudflare's post mentions this: " Because Cloudflare operates a large, shared infrastructure an HTTP request to a Cloudflare web site that was vulnerable to this problem could reveal information about an unrelated other Cloudflare site. " https://blog.cloudflare.com/incident-report-on-memory-leak-c...


It shouldn't be too difficult to feed an instrumented copy of the parser some fraction of their cached pages (after all, that's what they're for... right?) and calculate what percentage triggered, e.g., Valgrind, or caused some magic string tacked onto the end of the input to appear in the output.
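A rough sketch of that magic-string idea, with the caveat that parse_page() below is a hypothetical stand-in for an instrumented copy of the real parser (stubbed out here so the harness actually compiles and runs):

    #define _GNU_SOURCE        /* for memmem() on glibc */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CANARY "CANARY-7f3a9c1d"

    /* Hypothetical stand-in: the real thing would be the instrumented HTML parser. */
    static size_t parse_page(const char *in, size_t in_len, char *out, size_t out_cap)
    {
        size_t n = in_len < out_cap ? in_len : out_cap;
        memcpy(out, in, n);     /* a well-behaved parser never reads past in_len */
        return n;
    }

    /* Returns 1 if the parser emitted bytes it was never given, i.e. it read
     * past the logical end of its input buffer. */
    static int page_leaks(const char *page, size_t page_len)
    {
        size_t padded = page_len + sizeof(CANARY);
        char *in  = malloc(padded);
        char *out = malloc(2 * padded);
        if (!in || !out) { free(in); free(out); return -1; }

        memcpy(in, page, page_len);
        memcpy(in + page_len, CANARY, sizeof(CANARY));  /* bytes just "past the end" */

        size_t out_len = parse_page(in, page_len, out, 2 * padded);
        int leaked = memmem(out, out_len, CANARY, strlen(CANARY)) != NULL;

        free(in);
        free(out);
        return leaked;
    }

    int main(void)
    {
        const char *page = "<html><script type=\"broken";
        printf("leaked: %d\n", page_leaks(page, strlen(page)));
        return 0;
    }

Run that over a sample of cached pages and the hit rate gives you the percentage; Valgrind or ASan on the same corpus would catch reads that never make it into the output.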

I prefer CloudScare to Cloudbleed :)


Downpour is my preference right now. The clouds are dumping everything they got


How about Cloudburst?


CloudBust


If only CloudShare wasn't a thing already. :)


I'd suggest "FlareOut".


Cloudflush.


ShitFest


It is far from over, too! Google Cache still has loads of sensitive information, a link away!

Look at this, click on the downward arrow, "Cached": https://www.google.com/search?q="CF-Host-Origin-IP:"+"author...

(And then, in Google Cache, "view source", search for "authorization".)

(Various combinations of HTTP headers to search for yield more results.)


> The infosec team worked to identify URIs in search engine caches that had leaked memory and get them purged. With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains. The leaked memory has been purged with the help of the search engines.

So I tried it too, and there's still data cached there.

Am I misunderstanding something - that above statement must be wrong, surely?

They can't have found everything even in the big search engines if it's still showing up in Google's cache, let alone the infinity other caches around the place.

EDIT: If the Cloudflare team sees this: I see leaked credentials for these domains:

android-cdn-api.fitbit.com

iphone-cdn-client.fitbit.com

api-v2launch.trakt.tv


I'm also seeing a ton from cn-dc1.uber.com with oauth, cookies and even geolocation info. https://webcache.googleusercontent.com/search?q=cache:VlVylT...


That's terrifying.

Thanks to Uber now requiring location services on Always instead of just when hailing a car, my and others' personal location history even outside of Uber usage could have been compromised. Sweet.


To be fair, you were kind of a fool if you actually let Uber have your location at all times. As soon as they announced that I blocked Uber from my location. I only allow it when I take an Uber (which is almost never now).


Sometimes I'm in a rush and forget to turn it back to Never.

That doesn't make me a fool, it makes me human. Don't be a jerk. It's a dark pattern for a reason.


If you only sometimes forget, then that's not letting them have your location at all times, and you weren't called a fool.


Not a fool but ...


At least the location isn't embarrassing.[1]

[1] https://goo.gl/maps/FjQVttcZCpH2


Oh my gosh, that's the Ivey Business School, where I graduated from last year. I didn't expect this to hit so close to home...


so sorry for your loss


What did it show before it was taken down? In vague terms, of course.


Could someone enlighten me on why malloc and free don't automatically zero memory by default?

Someone pointed me to MALLOC_PERTURB_ and I've just run a few test programs with it set - including a stage1 GCC compile, which granted may not be the best test - and it really doesn't dent performance by much. (edit: noticeably, at all, in fact)

People who prefer extreme performance over prudent security should be the ones forced to mess about with extra settings, anyway.
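For anyone who wants to reproduce that kind of test, here's a toy demonstration of what MALLOC_PERTURB_ actually changes; it's glibc-specific (see mallopt(3), M_PERTURB), and exactly which byte you see without it depends on allocator internals:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *a = malloc(64);
        if (!a) return 1;
        memset(a, 'A', 64);          /* pretend this is sensitive data */
        free(a);

        char *b = malloc(64);        /* same size class, likely the same chunk */
        if (!b) return 1;
        /* skip the first bytes: glibc reuses them for free-list pointers */
        printf("byte 32 of the recycled chunk: 0x%02x\n", (unsigned char)b[32]);
        free(b);
        return 0;
    }

Run plain, the recycled chunk typically still shows 0x41 ('A'); run with MALLOC_PERTURB_=204, freed memory is filled with 0xcc and fresh allocations with its complement 0x33, so the stale 'A's are gone.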


Some old IBM environments initialized fresh allocations to 0xDEADBEEF, which had the advantage that the result you got from using such memory would (usually) be obviously incorrect. The fact that it was done decades ago is pretty good evidence that it's not about the actual initialization cost: these things cost a lot more back then.

What changed is the paged memory model: modern systems don't actually tie an address to a page of physical RAM until the first time you try to use it (or something else on that page). Initializing the memory on malloc() would "waste" memory in some cases, where the allocation spans multiple pages and you don't end up using the whole thing. Some software assumes this, and would use quite a bit of extra RAM if malloc() automatically wiped memory. It would also tend to chew through your CPU cache, which mattered less in the past because any nontrivial operation already did that.

I personally don't think this is a good enough reason, but it is a little more than just a minor performance issue.

That all being said, while it would likely have helped slightly in this case, it would not solve the problem: active allocations would still be revealed.


> Some old IBM environments initialized fresh allocations to 0xDEADBEEF, which had the advantage that the result you got from using such memory would (usually) be obviously incorrect.

On BSDs, malloc.conf can still be configured to do that: on OpenBSD, junking (fills allocations with 0xdb and deallocations with 0xdf) is enabled by default on small allocations, "J" will enable it for all allocations. On FreeBSD, "J" will initialise all allocations with 0xa5 and deallocations with 0x5a.


> What changed is the paged memory model: modern systems don't actually tie an address to a page of physical RAM until the first time you try to use it (or something else on that page). Initializing the memory on malloc() would "waste" memory in some cases, where the allocation spans multiple pages and you don't end up using the whole thing. Some software assumes this, and would use quite a bit of extra RAM if malloc() automatically wiped memory. It would also tend to chew through your CPU cache, which mattered less in the past because any nontrivial operation already did that.

Maybe an alternative approach is to simply mark the pages to be lazily zeroed out when attached, in the Page Table Entries of the MMU. They wouldn't be zeroed out at the time of the malloc() call, but only when they are attached to a physical memory location (the first time you use it).


And it seems to me the OS should ensure the pages are zero'd out rather than user space (via malloc()) doing it, because it's still a security hole to let a process read data that it's not supposed to have access to (whether it's from another process or the kernel - it doesn't matter).


The OS already zeroes out pages, obviously. But malloc doesn't usually request memory from the OS; it takes a chunk from the already allocated heap.


Unsure, not my job. But I read stuff along those lines. A modern OS plays all sorts of games to delay doing work. Allocate a couple of megs of memory and the OS sets up some pointers in a page table. And yes it'll keep already zero'd pages handy. And mark pages as dirty to be scraped clean later.


It doesn't need to affect your CPU cache, because x64 processors have non-temporal writes (streaming stores) that bypass the cache.

The stuff about eagerly allocating pages is spot on though.

There is calloc which allocates and zeroes memory, but people don't use it as often as they should.


Parsers don't usually need to hold onto what they're parsing for a very long time, so unless they were running this in parallel on a machine with 4k cores, I'd imagine it would be much more likely that a buffer overrun hits the middle of an already-freed allocation rather than going into an active one.

In terms of "wasting" memory, perhaps the kernel could detect that you are writing 0s to a COW 0 page and still not actually tie the page to physical RAM. (If you're overwriting non-0 data, well it's already in a physical page.)

I don't quite follow the details of the CPU cache issue and why that is more-than-minor.

I do think in this day and age we should be re-visiting this question seriously in our C standard libraries. If the performance issues are actually major problems for specific systems, the old behaviour could be kept, but after benchmarking to show that it really is a performance problem.


> In terms of "wasting" memory, perhaps the kernel could detect that you are writing 0s to a COW 0 page and still not actually tie the page to physical RAM.

Writing to your COW zero page causes a page fault. Now, in theory you could disassemble the executing instruction and if it's some kind of zero write, just bump the instruction pointer and go back to userspace - but then the very next instruction in your loop that zeroes the next 8 bytes will cause the same page fault. And the next. And the next...

Taking a page fault for every 8 bytes in your allocation is completely infeasible. You'd be better off taking the hit of the additional memory usage.


How about this idea: free() zeros or unmaps all memory it allocated. This shouldn't fault. The OS zeros pages when mapping them into the process space (which it should do anyway). I think that solves the problem.


free() doesn't know what portion of the memory you allocated actually got written to. So for the model where a large, page-spanning buffer is allocated and only a small portion used, this approach causes many unnecessary page faults at free() time as it tries to zero out lots of memory that was never used or paged in at all.


Large buffers just get unmapped, so the OS can fix that problem.


An invariant you get from most kernels is that all new memory pages are zeroed when mapped into processes (normally through mmap or sbrk), so you only have the paging problem when initializing with a value other than zero.


Zeroing on malloc and/or free would not have prevented this type of error, since the information disclosure was due to an overflow into an adjacent allocated buffer.

However, zeroing on free is generally a useful defense-in-depth measure because it can minimize the risk of some types of information disclosure vulnerabilities. If you use grsecurity, this feature is provided by grsecurity's PAX_MEMORY_SANITIZE [0].

[0]: https://en.wikibooks.org/wiki/Grsecurity/Appendix/Grsecurity...


Zeroing on alloc/free probably wouldn't have helped much with this bug. Data in live allocations would still be leaked.


> Could someone enlighten me on why malloc and free don't automatically zero memory by default?

The computational cost of doing so, I suspect.


Just like why most filesystems don't zero deleted files.


Neither of these are good reasons: I already talked about MALLOC_PERTURB_ (man mallopt) in my post and my naive performance tests, and we rarely get bad security holes based on data from deleted files left on filesystems.


Unfortunately, people write microbenchmarks of malloc and free a lot (and not completely without reason: they do quite often show up high in profiles).

For example, binary-trees on the Benchmarks Game is basically malloc/free bound (or at least is supposed to be as Hans Boehm originally designed it). Likewise, most JavaScript benchmarks (V8 splay, for example) are heavily influenced by raw allocation performance. Many people choose browsers and programming languages based on relatively small differences in these results. All of the incentives align in favor of performance, not security, because performance is easy to measure and security is not.


You asked for a reason, not for a good reason.

malloc/free were designed around 1972. That was a time when performance was much more important and security concerns didn't really exist.

Modern systems, like Go, do zero out newly allocated memory because they consider a bit more security to be more important than a bit more performance.

But changing the defaults of malloc/free is not really an option and it would probably break stuff.

Especially on Linux, where, I believe, malloc returns uncommitted pages, which increases the perf advantage in some cases.

Security conscious programmers can use calloc() or write their own wrappers over malloc/free.
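A minimal sketch of such a wrapper, assuming you're willing to pay a small per-allocation header to remember the size (zmalloc/zfree are made-up names, not a standard API):

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    typedef union {
        size_t size;
        max_align_t align;          /* keep the returned pointer suitably aligned (C11) */
    } zhdr;

    void *zmalloc(size_t n)
    {
        zhdr *h = calloc(1, sizeof *h + n);   /* calloc hands back zeroed memory */
        if (!h) return NULL;
        h->size = n;
        return h + 1;
    }

    void zfree(void *p)
    {
        if (!p) return;
        zhdr *h = (zhdr *)p - 1;
        /* Scrub before handing the chunk back. A compiler may elide a plain
         * memset right before free(); explicit_bzero or memset_s (where
         * available) are the robust choices. */
        memset(h, 0, sizeof *h + h->size);
        free(h);
    }

    int main(void)
    {
        char *buf = zmalloc(32);
        if (!buf) return 1;
        strcpy(buf, "session-token");
        zfree(buf);                 /* the token doesn't linger on the heap */
        return 0;
    }

As other comments in this thread note, this only narrows the window: anything still live in an adjacent allocation, as in this bug, would leak regardless.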


they aren't good reasons now. They were good reasons ~20 years ago.

The language spec should probably now default to zeroing memory unless you specifically ask it not to... and maybe that should be a verbose option :)


Are these results hardware independent? Maybe it makes a difference on older machines, or different architectures.


I imagine clearing memory on free is more relevant than MALLOC_PERTURB_?


calloc zeroes memory on allocation.


Yes, I think the question was something like "why doesn't malloc call calloc?".


Always nice to have options. Not zeroing memory on allocation might save a few cpu cycles.


It's pretty much the definition of false economy. Would you rather save a few cycles or suffer debilitating security bugs at random intervals? Always use calloc unless a) there's a proven performance problem and b) you know for a fact that due to careful inspection/static analysis/black magic malloc is safe. Then use calloc anyway because why risk it?


It depends on the size of the chunk of allocated memory. If it is quite large, time spent zeroing it can be substantial. Then again, if you're allocating in performance critical path, you're doing it wrong anyways.


It takes time to do that.


> that above statement must be wrong, surely?

Either they believe it's right, which means they're not competent enough to really assess the scope of the leak; or they don't believe it, but they went "fuck it, that's the best we can do".

In either case, it doesn't really inspire trust in their service.


you missed one possibility: that they're deliberately attempting to downplay the severity to make themselves look less incompetent


jgrahamc: can you list which public caches you worked with to attempt to address this? It does not inspire confidence when even google is still showing obvious results


Google, Microsoft Bing, Yahoo, DDG, Baidu, Yandex, and more. The caches other than Google were quick to clear and we've not been able to find active data on them any longer. We have a team that is continuing to search these and other potential caches online and our support team has been briefed to forward any reports immediately to this team.

I agree it's troubling that Google is taking so long. We were working with them to coordinate disclosure after their caches were cleared. While I am thankful to the Project Zero team for informing us of the issue quickly, I'm troubled that they went ahead with disclosure before Google's crawl team could complete the refresh of their own cache. We have continued to escalate this within Google to get the crawl team to prioritize the clearing of their caches, as that is the highest-priority remaining remediation step.



Thousands of years from now, when biological life on this planet is all but extinct and superintelligent AI evolving at incomprehensible rates roam the planet, new pieces of the great PII pollution incident that CloudFlare vomited across the internet are still going to be discovered on a daily basis.


I was expecting this:

Thousands of years from now, when biological life on this planet is all but extinct and superintelligent AI evolving at incomprehensible rates roam the planet, taviso will still be finding 0-days impacting billions of machines on an hourly basis.

Be glad that Google is employing him and not some random intelligence agency.


I have huge respect for taviso and his team. Their track record in security work is so impressive. They are without a doubt extremely capable.

However, I am always wondering: are they really globally unique in their work and skill? So that they are really the ones finding all the security holes before anyone else does because they are just so much better (and/or with better infrastructure) than anyone else? Or is it more likely that on a global scale there are other teams who at least come close regarding skill and resources, but who are employed by actors less willing to share what they found?

I really do hope Tavis is a once-in-a-lifetime genius when it comes to vulnerability research!


One of the big controversies in the infosec world is people who sell 0-day exploits to "security companies." Some go for tens of thousands of dollars. Ranty Ben talked about how some people live off this type of income when it came up in a panel discussion at Ruxcon 2012.


No, he is definitely not alone; some of them work for other security companies or antivirus companies, and some sell the vulnerabilities they find.


What's funny is he kinda just stumbled upon this bug accidentally while making queries.

If I were just casually googling two weeks ago and came across a leaked cloudflare session in the middle of my search results I think I would have vomited all over my desk immediately. Dude must have been sweating bullets and trembling as he reached out on twitter for a contact, not knowing yet how bad this was or for just how long it's been going on.




I believe the 2009 Yahoo-Bing agreement is still in force, where Bing provides search results on Yahoo.com:

http://news.bbc.co.uk/2/hi/business/8174763.stm

I know the search I just performed on Yahoo states "Powered by Bing™" at the bottom.


Yeah, I thought that could be it as well, but this was at the bottom of the Yahoo result:

<!-- fe072.syc.search.gq1.yahoo.com Sat Feb 25 03:58:27 UTC 2017 -->

Given they are identical results, it's pretty clear it must be a shared index, I suppose; that, or the leaked memory was cached.


Yahoo provides a front end to the search results, Bing provides the crawl/search/archives.


What the hell does Yahoo even do anymore? Just email? Or is that just a proxy to hotmail?


Finance, News, Mail, Fantasy Sports, etc., to name a few categories where they are still in the top three.

Yahoo was never really a search company (even its founding, it was a "directory", not a "search"). Sure, they pretended fairly well from 2004ish (following their move off Google results) to 2009 (when they did the Bing deal), but the company never really nailed search or more importantly search monetization despite acquiring one of the first great search engines (Altavista) and the actual inventor of the tech Google stole for its cash cow Adwords (Overture).


Isn't Yahoo search just a frontend to bing nowadays?


Some IPv6 internal connections, some websocket connections to gateway.discord.gg, rewrite rules for fruityfifty.com's AMP pages, and some internal domain `prox96.39.187.9cf-connecting-ip.com`.

And some sketchy internal variables: `log_only_china`, `http_not_in_china`, `baidu_dns_test`, and `better_tor`.


Exactly. It looks like the cleanup so far has only looked for the most obvious matches (just searching for the Cloudflare-unique strings). There's surely more out there where "only" the user data was leaked, and that is still in the caches.


The event where one line of buggy code ('==' instead of '<=') creates global consequences, affecting millions, is a great illustration of the perils of monoculture.

And monoculture is the elephant in the room most pretend not to see. The current engineering ideology (it is ideology, not technology) of sycophancy towards big and rich companies, and popular software stacks, is sickening.
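To make the '==' point concrete, here's a schematic toy (emphatically not Cloudflare's actual parser code; the *p check exists only to keep the demo inside one buffer): a cursor that can advance more than one byte at a time can step over the end pointer, and an equality-only test never notices, so the loop keeps reading the neighbouring bytes.

    #include <stdio.h>

    int main(void)
    {
        char buf[] = "<a><b><"              /* logical input: 7 bytes, ends mid-tag */
                     "SECRET-NEXT-DOOR";    /* stands in for adjacent heap memory   */
        const char *p  = buf;
        const char *pe = buf + 7;           /* one past the logical end of input    */

        while (p != pe && *p) {             /* fragile: equality, not p < pe        */
            if (*p == '<')
                p += 2;                     /* "consume a tag": can jump over pe    */
            else
                p += 1;
            putchar(*p ? *p : '.');
        }
        putchar('\n');    /* the output includes bytes from past the logical end */
        return 0;
    }

With p < pe (or the '<=' form, depending on which side you write it) as the loop condition, overshooting the end pointer fails safe instead of silently continuing.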


How about clearing all the cache? (Or at least everything created the last few months.)

I've never seen anyone suggest it; I suppose it cannot or should not be done for some reason?


You are asking them to delete petabytes of data. Some parties are interested in owning such data.


The real problem is going to be where history matters and you can't delete - for example archive.org and httparchive.org. There is no way to reproduce the content in the archive obviously, so no one will be deleting it. The only way is to start a massive (and I mean MASSIVE) sanitization project...


Or clearing all the cache of Cloudflare's websites. I think that's doable.


At this moment the problem is not on Cloudflare's side: search engines crawled tons of data with leaked information. Even if Cloudflare drops its caches, the data is already on 3rd-party servers (search engines, crawlers, agencies).


That's why he asked that the caches of all Cloudflare sites be dropped, not by Cloudflare but by these 3rd parties.


That might work. If said 3rd parties were interested in helping. Most of them might be but it just takes one party refusing to help and then you've still got the data out there.


No, I meant: get a list of all domains using Cloudflare and get those removed from the crawlers' caches.


Offtopic: "with all due respect" is often followed by words void of respect.


He is British. "With all due respect" means no respect is due. I don't think it's possible to show less respect while appearing polite. In other words, them's fighting words.

http://todayilearned.co.uk/2012/12/04/what-the-british-say-v...


This is perfectly fine if the amount of respect due is sufficiently low.


Given the answers that Cloudflare is giving, I'd say it's quickly approaching zero.


Ha! Excellent point!


Incredible. Are they really trying to pin it on Google? Yes, clearing caches would probably remove some part of the information from public sources. But you can never clear every cache world-wide. Nor can you rely on the removed part having really been removed before being copied elsewhere.

The way I see it, the time given by GZero was sufficient to close the loophole; it was not meant to give them a chance to clear caches world-wide. They have a PR disaster on their hands, but blaming Google won't help with it.


You really have to see this to grasp the severity of the bug.


The scope of this is unreal on so many levels.

20 hours since this post and these entries are still up ...


Can anyone provide some context, please?


For anyone being linked directly to the post: the link back to the parent page is right on top: https://news.ycombinator.com/item?id=13718752

You can also click on "parent", and repeat as necessary.


The bottom of the file has contents from another connection. Notably

    HTTP/1.1
    Host gateway.discord.gg



After 16 hours, those cached pages are still up...


While it is good that you discovered leaked content is still out in the wild, your tone is somewhat condescending and rude. No need for it.


You might not know the history here. Tavis works at Google and discovered the bug. He was extremely helpful and has gone out of his way to help Cloudflare do disaster mitigation, working long hours throughout last weekend and this week.

He discovered one of the worst private information leaks in the history of the internet, and for that, he won the highest reward in their bug bounty: a Cloudflare t-shirt.

They also tried to delay disclosure and wouldn't send him drafts of their disclosure blog post, which, when finally published, significantly downplayed the impact of the leak.

Now, here's the CEO of Cloudflare making it sound like Google was somehow being uncooperative, and also claiming that there's no more leaked private information in the Bing caches.

Wrong and wrong. I'd be annoyed, too.

--

Read the full timeline here: https://bugs.chromium.org/p/project-zero/issues/detail?id=11...


I think this is a one-sided view of what really happened.

I can see a whole team at Cloudflare panicking, trying to solve the issue, communicating with big crawlers to evict all of the bad cache they have, while trying to craft a blog post that would save them from a PR catastrophe.

All the while Taviso is just becoming more and more aggressive to get the story out there. 6 freaking days.

Short timelines for disclosure are not fun.


There was no panic. I was woken at 0126 UTC the day Tavis got in contact. The immediate priority was to shut off the leak, but the larger impact was obvious.

Two questions came to mind: "how do we clean up search engine caches?" (Tavis helped with Google), and "has anyone actively exploited this in the past?"

Internally, I prioritized clean up because we knew that this would become public at some point and I felt we had a duty of care to clean up the mess to protect people.


> "has anyone actively exploited this in the past?"

Has this question been answered yet?


We're continuing to look for any evidence of exploitation. So far I've seen nothing to indicate exploitation.


>> "has anyone actively exploited this in the past?"

Wouldn't your team now have to decide how to deal with this even after some specific well-known caches have been cleared? I mean, there's no guarantee that someone hasn't collected all this data and won't use it to target those Cloudflare customers' sites. Are you planning to ask all your customers to reset all their access credentials and other secrets?


Google Project Zero has two standard disclosure deadlines: 90 days for normal 0days, and 7 days for vulnerabilities that are actively being exploited or otherwise already victimizing people.

There are very good reasons to enforce clear rules like this.

Cloudbleed obviously falls into the second category.

Legally, there's nothing stopping researchers from simply publishing a vulnerability as soon as they find it. The fact that they give the vendor a heads-up at all is a courtesy to the vendor and to their clients.


> The fact that they give the vendor a heads-up at all is a courtesy to the vendor and to their clients.

It is the norm, and it is called responsible disclosure. You're trying to do the least harm, and the least harm is a combination of giving the developers some time to develop a fix and getting the news out there so that customers and customers of customers are aware of the issue.


With all due respect, they should suffer a pr catastrophe.


In this case I feel your comment is misdirected. Cloudflare was condescending in their own post above, the one he was replying to: "I agree it's troubling that Google is taking so long" is a slap in the face to a team that has had to spend a week cleaning up a mess they didn't make. It is absolutely ridiculous that they are shitting on the team that discovered this bug in the first place, and to top it all off they're shitting all over the community as a whole while they downplay and walk the line between blatantly lying and just plain old misleading people.


I would be pretty mad if a website that I was supposed to trust with my data made an untrue statement about how something was taken care of when it was not, and then published details of the bug while cached data is still out in the wild, now exploitable by any hacker who was living under a rock during the past few months.


Actually I proxy two of my profitable startup frontend sites with CloudFlare, so I am affected (not really), but giving them the benefit of the doubt as they run a great service and these things happen.


They are well past deserving the benefit of the doubt.

I would also advise you to notify your cloud-based services' customers about how they might be affected (yes, really); trust erosion tends to be contagious.


Agreed. The condescending downplaying tones displayed just aren't acceptable.


We only host our static corporate sites (not apps) and furthermore never used CF email obfuscation, server-side excludes or automatic HTTPS rewrites, thus we are not vulnerable.


Hi,

I think you have misunderstood the issue. Just because YOU did not use those services does not mean your data was not leaked. It means that other people's data was not leaked on YOUR site, but YOUR data could be leaked on other sites that were using these services.


> We only host our static corporate sites (not apps)

If this part is true, they're not vulnerable. Only data that was sent to CloudFlare's nginx proxy could have leaked, so if they only proxy their static content, then that's the only content that would leak.

The rest of their comment gives the wrong impression though, yeah.


> Only data that was sent to CloudFlare's nginx proxy could have leaked, so if they only proxy their static content, then that's the only content that would leak.

The way it worked, the bug also leaked data sent by the visitors of these "static sites": IP addresses, cookies, visited pages, etc.


Thanks for clarifying. You are absolutely right.


So far as I know, nothing like this has ever happened at any CDN before.


There have definitely been incidents where CDNs mixed up content (of the same type) between customers. Not exactly like this, but close.


I find it troubling that the CEO of Cloudflare would attempt to deflect their culpability for a bug this serious onto Google for not cleaning up Cloudflare's mess fast enough.

I don't use CF, and after seeing behavior like this, I don't think I will.


On a personal note, I agree with you.

Before Let's Encrypt was available for public use (beta), CF provided "MITM" HTTPS for everyone: just use CF and they would issue you a certificate and serve HTTPS for you. So I tried that with my personal website.

But then I found out that they replaced a lot of my HTML, resulting in mixed content on the HTTPS version they served. This is the support ticket I filed with them:

  On wang.yuxuan.org, the css file is served as:

  <link rel="stylesheet" title="Default" href="inc/style.css" type="text/css" />

  Via cloudflare, it becomes:

  <link rel="stylesheet" title="Default" href="http://wang.yuxuan.org/inc/A.style.css.pagespeed.cf.5Dzr782jVo.css" type="text/css"/>

  This won't work with your free https, as it's mixed content.

  Please change it from http:// to //. Thanks.

  There should be more similar cases.
But CF just refused to fix that. Their official answer was that I should hardcode https. That's bad because I only have https with them; it would break as soon as I leave them (I guess that makes sense to them).

Luckily I have Let's Encrypt now and no longer need them.


Well, the CEO does have beef with Google: https://blog.cloudflare.com/post-mortem-todays-attack-appare...

This led to Cloudflare refusing to implement support for Google Authenticator for 4 years.


lol, really? Google authenticator is just TOTP - it's an open standard. That seems childish.

Also, the notion that the CEO of an internet company would have a "beef with Google" is pretty funny.


This comment greatly lowers my respect for Cloudflare.

Bugs happen to us all; how you deal with this is what counts, and wilful, blatant lying in a transparent attempt to deflect blame from where it belongs (Cloudflare) onto the team that saved your bacon?

I've recommended Cloudflare in the past, and I was planning, with some reservations, to continue to do so even after disclosure of this issue. But seeing this comment? I don't see how I can continue.

(For the sake of maximum clarity: I take issue: 1) with the attempt at suggesting the main issue is in clearing caches, not in the leak itself. It doesn't matter how fast you close the barn door after the horse is gone and the barn has burned down. 2) With the blatantly false claim that non-Google caches have been cleared, or were faster to clear than Google's. Cloudflare should know, better than anyone, the massive scope of this leak, and the fact that NO search engine's cache has been or could be cleared of this leak. If you find yourself in a situation so bad you feel like you need to misdirect attention to someone else, and it turns out no one else is actually doing anything so you have to lie about that... maybe you should just shut up and stop digging?)


Hey! Don't keep the horse locked in if the barn is burning!


> I agree it's troubling that Google is taking so long.

Google has absolutely no obligation to clean up after your mess.

You should be grateful for any help they and other search engines give you.


You're right, I guess. (Disclaimer: Not affiliated with any company affected / involved)

But I still find it troubling. Is it their mess? No. Does it affect a lot of people negatively? Yes. I expect Google to clean this up because they're decent human beings. It's troubling because it's not just CloudFlare's mess at this point.

It reminds me of the humorous response to "Am I my brother's keeper?", which is "You're your brother's brother"


Google cleaning this up is going to take a ton of man-hours, which will cost a LOT of money. How much money is Google obligated to spend to help a competitor who fucked up? Are they supposed to just drop everything else and make this the top priority?


I don't see this as them helping a competitor. The damage has been done (in terms of customer relations).

I view leaving up the cached copy of leaked data as being a jerk move - not towards CloudFlare, but to anyone whose data was leaked.

This is an opportunity for Google to show what they do with rather sensitive data leaks - do they leave them up or scrub them?

Had damage from the leak already been done (to those whose data it was)? Probably. Even taking that into account, I think Google search comes off as a jerk in this situation.


I feel like you are operating under the assumption that deleting this leaked data is trivial, that they just have to hit a delete button and the data is gone.

This is not the case; it is not obvious, trivial, or easy to delete the leaked data. It is not simple to find it all. This is not like they are being given a URL and being asked to clear the cached version of it; they are being asked to search through millions of pages for possibly leaked content.


I despise the way you've dealt with this issue with as much dishonesty as you thought you could get away with.

I will be migrating away from your service first thing Monday. I will not use your services again and will ensure that my clients and colleagues are informed of your horrific business practices now and in the future.


Next time, beware of parsers. Or formally verify them :)

https://arxiv.org/pdf/1105.2576.pdf

(disclaimer: co-author)


For those who haven't been following along, this is the CEO of CloudFlare lying in a way that misrepresents a major problem CloudFlare created. Additionally, they are trying to blame parts of this problem on those who told them about the problem they created.


At least tell me they got their t-shirts lol.


>I'm troubled that they went ahead with disclosure before Google crawl team could complete the refresh of their own cache.

It sounded like they (CF) were under a lot of pressure to disclose ASAP from Project Zero and their 7-day requirement...


eastdakota is one of the cloudflare guys, so "they" in that sentence can only refer to Google (see also the previous paragraph/sentences, where eastdakota used "we" for cloudflare).


He's the CEO


With something this drastic, 7 days was generous.


>> We have continued to escalate this within Google to get the crawl team to prioritize the clearing of their caches as that is the highest priority remaining remediation step.

If you are using the same attitude with their team as you use in this comment, I'm pretty sure they will be thrilled to set aside all their regular work and help you out cleaning up an enormous mess created by a bug in your service.


Oh wow, taking a shit on Google after they helped you by reporting a critical flaw in your infrastructure.

I'm no longer using CF for my own projects, but you've just cemented my decision that none of my clients will either.


https://webcache.googleusercontent.com/search?q=cache:lw4K9G...

    Internal Upstream Server Certificate
    ...
    /C=US/ST=California/L=San Francisco/O=Cloudflare Inc./OU=Cloudflare Services - nginx-cache/CN=Internal Upstream Server Certificate
That really doesn't look good.


Just to point out, this is apparently a cert used for communicating between Cloudflare's services, which has (presumably) since been replaced. Cloudflare customers' certs weren't exposed.


Correct. That's that cert.


Just to be clear: is this a cert used for authenticating with Cloudflare's systems or just for encryption? If used for authentication, you need to ensure it hasn't been stolen and used before this was found by P0.


Lol, Google just purged that search.

EDIT: but there's still plenty of fish: http://webcache.googleusercontent.com/search?q=cache:lw4K9G2...

This will take weeks to clean, and that's just for Google.

EDIT2: found other oauth tokens, lots of fitbit calls... And this just by searching for typical CF internal headers on Google and Bing. There is no way to know what else is out there. What a mess.


Ouch, you really see everything :

> authorization: OAuth oauth_consumer_key ...

What a shit show. I'm sorry, but at this point there must be consequences for incompetence. Some might argue "but nobody could have done anything" ...

I'm sorry, CF has the money to ditch C entirely and rewrite everything from the ground up in a safer language, I don't care what it is: Go, Rust, whatever.

At this point, people using C directly are playing with fire. C isn't a language for highly distributed applications; it will only distribute memory leaks ... With all the wealth there is in the whole Silicon Valley, trillions of dollars, there is absolutely zero effort to come up with an acceptable solution? All these startups can't come together and say: "OK, we're going to design or choose a really safe language and stick to that"? Where does all that money go, then? Because this bug is going to cost A LOT OF MONEY to A LOT OF PEOPLE.


These guys were probably saved by using OAuth - there is a consumer secret (which the "_key" is just an identifier for) and an access token secret, neither of which is sent over the wire - just a signature based on them. (The timestamp and nonce prevent replay attacks.)

OAuth2 "simplified" things and just sends the secret over the wire, trusting SSL to keep things safe.


Does this have anything to do with CloudFlare's ambitious attempt to be the first service to proxy your https traffic to your users?

Perhaps the largest MITM ever eh?


This actually happened because they started to rewrite it all, according to their blog post.


Started to re-write it...in C


Good. They're trying to clean up all the private data leaked everywhere. I'm tempted to say "why couldn't they figure out this Google dork themselves?" but they've probably been slammed for the past 7 days cleaning up a bunch of stuff anyway.


You have no idea.


The effort you're putting into cleaning up someone else's mess cannot be overstated, nor can it be sufficiently appreciated. Thanks!


Any chance you can describe why these cached pages missed the purge that Cloudflare initiated? Seems like Cloudflare should have brought in an outside expert to try to exploit this issue before the disclosure was made.


For vulnerabilities with immediate exploit exposure, where people are currently being victimized by the flaw, Project Zero has a 7-day embargo.

The short waiting period balances the vendor's interest in coordinating the smoothest fix to the problem with the public's interest in knowing its exposure and maximizing its options for reacting to the exposure.

The fixed waiting period keeps the process sane. Every vendor you'll ever disclose a serious vulnerability to will try to delay disclosure, usually repeatedly. If you set a precedent of making arbitrary exceptions, you'll never be able to stare anyone down.

Again: as the reporters, you're trying to balance the vendor's interests with those of the public. Your credibility in these situations is pretty important, not just for this vulnerability, but for the next ones. With P0, we all know there will be a long series of "next ones" to be concerned about.


I definitely understand the embargo, but this is one of those situations where the vuln was already fixed and it's likely very few malicious actors (possibly 0, but of course who knows) were aware of its existence.

I feel like adding even just another day or two would've allowed them to purge more of these search results. I think that would greatly outweigh the increased risk of letting it remain undisclosed for slightly longer.


Thank you for your thoughtful reply; I realize the difficult situation you are in.


Hah, no, my situation is super easy; it is "partisan bystander." I don't work for Google.


FYI, I'm seeing some more of these results show up (with active caches) for the following searches:

"CF-RAY" "CF-Force-Miss-TS"

"X-SSL-Server-Name"

"Internal Upstream Server Certificate0"


CF-RAY isn't internal and will show up in any CloudFlare hosted site's response headers.


I'm aware of this, but combined with "CF-Force-Miss-TS" that search was turning up a number of clear examples of cached Cloudflare memory data.


Your hard work is appreciated.


Not sure if you'll see this, but I've noticed that the cache links have been removed on literally all hits for these queries.

And yet, I occasionally see working cache links on relevant unaffected pages.

Really, really awesome to see this kind of response. It's an obvious course of action (also considering the corporate liability of publicly holding/offering this data), but it's really cool to see everyone work to fix this en masse so quickly.

I think a lot of people would enjoy hearing campfire battle stories of the past ~week once this is all over.


Thank you for all your hard work.


> This will take weeks to clean, and that's just for Google.

Couldn't Google just purge all cached documents which match any Cloudflare header? This will probably purge a lot of false positives, but it's just cached data, so would that loss really matter? My guess is that this approach should not take more than a few hours on Google's infrastructure.

Of course, this leaves the problem of all the other non-Google caches out there.
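Just to illustrate the idea, a toy sketch (nothing like what Google actually runs; the marker strings are simply the internal headers people in this thread have been searching for):

    # Hypothetical sketch: flag any cached document containing Cloudflare-internal
    # strings that should only appear in leaked proxy memory, never in normal pages.
    LEAK_MARKERS = (
        "CF-Host-Origin-IP:",
        "CF-Force-Miss-TS",
        "Internal Upstream Server Certificate",
    )

    def looks_leaky(cached_body: str) -> bool:
        return any(marker in cached_body for marker in LEAK_MARKERS)

    # cache maps URL -> cached HTML; keep only entries with no leak markers.
    cache = {
        "https://example.com/ok":  "<html>a normal page</html>",
        "https://example.com/bad": "... CF-Host-Origin-IP: 203.0.113.7 ...",
    }
    cache = {url: body for url, body in cache.items() if not looks_leaky(body)}

The false positives come from pages that legitimately contain those strings (this thread, for one), and the false negatives from leaked memory that happens not to include any recognizable marker - the matching itself is the easy part.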


OAuth1 doesn't send the secrets with the requests, just a key to identify the secret and a signature made with the secret.

OAuth2 does send the secret, typically in an "Authorization: Bearer ..." header.

The uber stuff that somebody else linked to looks like a home-grown auth scheme and it appears that "x-uber-token" is a secret, but hard to know for sure.


So while people are having fun here with search queries, how many scripts are already up and running in the wild, scraping every caching service they can think of in creative ways for useful data...

This is an ongoing disaster, wasn't this disclosed too soon?


The "well-known chat service" mentioned by Tavis appears to be Discord, for the record.

edit: Uber also seems to be affected.


>It is a snapshot of the page as it appeared on Feb 21, 2017 20:20:45 GMT

So the issue wasn't fully fixed on Feb 19, or Google's cache date isn't accurate?


It seems like the reasonable thing for Google to do is to clear their entire cache. The whole thing. This is the one thing that they could do to be certain that they aren't caching any of this.


What about Bing, Baidu, Yandex, The Internet Archive, and Common Crawl? What about caches that are surely maintained by the NSA, ФСБ, and 3PLA?


Of course. Google dumping their cache puts only a small dent into the problem, but I feel that it's their responsibility to the innocent site operators caught in the middle of this.


Cloudflare's incompetence isn't Google's responsibility, particularly when Google wiping out their caches and damaging their own search results doesn't fix the problem. Hackers know how to use more than one search engine.


That only gives them an excuse to do nothing about this. All those companies should immediately go ahead and update any data that could have possibly leaked + inform their customers.


CF should be thankful Google is doing any of this; clearing their entire cache would cost Google real money to re-index the web from scratch.


That might be a bit too extreme. But they should do something quickly to try to find all of these.


I would say Cloudflare should hire Google to try to find them. It's really not on Google IMO (unless caching has some implications regarding storing sensitive data).



Wow, I just tried this, the first result with a google cache copy has a bunch of the kind of data described. Although there was only one result with a cache.


The second page had a result with an OAuth2 Bearer token in it.


PII, OAuth data, etc.


I've so far seen an OAuth key for Fitbit (via their Android app) and API keys for Trakt (though apparently that service doesn't use them?)

I don't know, this just seems catastrophic.


I searched for

"CF-Host-Origin-IP:" token

.... uhm is that what I think I'm seeing???


The first couple I looked at were requests to Uber and Fitbit...


One of my Uber rides two weeks ago went completely nuts. Both my app and my driver's app screwed up at the same time; I was never picked up, and then seconds later the app claimed I had reached my destination.

You have to wonder whether something like this is implicated.


That's one phenomenal leap of logic there. Why would you think that?


Merely that both my app and the driver's app screwed up at the same time, and had a good chance of hitting the same Uber endpoint.

Apps that consume APIs would be more sensitive to unexpected junk than browsers.


But there are so many other much more likely reasons why something like that would have happened, it is quite a leap to think that it is somehow related to this issue.


Without disagreeing, can you give me an example?

And it's just a speculation. Shrug.


One simple explanation could be the road was between very large concrete buildings or the area has some sort of GPS interference (there is one place in Tokyo that jumps my GPS and probably others' by about 300m to the same location every time). Another simple explanation is the software has a bug on when it thinks you arrive in some extremely bizarre scenario (hence you both had it happen simultaneously).

I don't know how it works in the back so this is all speculation of course.


Yep, but I'd already taken an Uber ride from the exact same place the day before. And everything went smoothly.


Probably not.

If someone knew about this exploit they're not going to be messing with people's Uber rides for lulz.


I wasn't implying intention.


This is quite bad. I hope Google can put some effort into clearing its cache too.


Time to find out where various "booter" sites are actually hiding.


If anyone here is HIPAA-regulated or you have a customer who is, and you used Cloudflare during those dates, it is Big Red Button time. You've almost certainly got a reportable breach; depending on how tightly you're able to scope it maybe it won't be company-ending.


> If anyone here is HIPAA-regulated or you have a customer who is

Cloudflare certainly does; I founded a health tech company, and Cloudflare was the recommended go-to for health tech startups who needed a CDN while serving PHI.

And this is definitely a reportable breach. Technically any breach is supposed to be reported to HHS, but in reality, a lot of covered entities (e.g. insurers) fail to report smaller breaches (which, as a patient, should terrify you). The big ones, though, are really, really bad, and when reported, the consequences can be very serious and potentially even include serving time, depending on the circumstances.

The reason I can be so confident that this is a reportable breach is that the definition of PHI is so broad that even revealing the existence of information between two known entities can be considered protected information. Anything more specific, like a phone number or DOB, or time of an appointment (even if you don't know who the appointment corresponds to) - that's always protected. And Cloudflare certainly has many of those.


Well, HIPAA wouldn't allow your HTTPS traffic to flow unencrypted through a shared proxy, right? This means Cloudflare couldn't offer that feature, so they probably didn't?

Just think about the HIPAA document describing a single endpoint of dozens of sensitive datastreams, decrypting and then encrypting them all on the same machine, a machine that does some random HTML parsing for snippet caching on the side.

I don't see that passing review, but perhaps I'm naive...


From their blog post: https://blog.cloudflare.com/incident-report-on-memory-leak-c...

"Because Cloudflare operates a large, shared infrastructure an HTTP request to a Cloudflare web site that was vulnerable to this problem could reveal information about an unrelated other Cloudflare site."

You don't need to be using this feature, or to be sending malformed HTML yourself - just to be in memory for this Cloudflare process.


Apparently I was incorrect, and HIPAA does not require protected data streams to be isolated from each other. Perhaps I was confusing some other (European) regulation. For HIPAA it seems to be sufficient to promise that everything is secure, that you have documented everything and that you know what to do when stuff goes wrong.

So we should see very quickly that Cloudflare knows what to do when stuff goes wrong.


Why isn't the Cloudflare traffic encrypted with HTTPS??


It probably was, but any encrypted data still exists in unencrypted form in the server's memory before it's encrypted and sent out over https. You have to have something to encrypt before you can encrypt it.

The memory leaked by this bug includes that pre-encryption data, which is what we're seeing here.

(At least that's my interpretation, computer security isn't quite my wheelhouse)


Does Cloudflare sign BAAs?


I've also been looking into the same question, and I don't see any external indication that they consider themselves a Business Associate as far as their policies go. I would argue, however, that CloudFlare is a BA by definition if an application is using any of the WAF or SSL proxy functionality.

We've been reaching out to a couple of vendors that do use the proxy functionality (given that the data spill could impact our clients as well). Hoping to resolve the BAA uncertainty in the process too.


Isn't it worse than that? Even if you are not a CF user, if your apps make calls to a third party site protected by CF, you could be at risk (stolen credentials, API keys), and could be attacked using those now.


That's also a bad thing, but you can roll creds and check if anyone has exfiltrated data from your various accounts. You can't roll patient identities. There doesn't appear to be any way to figure out which of your HTTPS pages served in last 6 months are presently publicly exposed.

I feel for folks who lost API keys -- really -- but everyone regulated should be in full-on disaster recovery mode right now.


If you are/were using Cloudflare to cache PHI through their CDN without a BAA, you were likely in breach before this.

Some have suggested that Cloudflare might not be a business associate because of an exception to the definition of business associate known as the "conduit" exception.

Cloudflare is almost certainly not a conduit. HHS's recent guidance on cloud computing takes a very narrow view[0]:

"The conduit exception applies where the only services provided to a covered entity or business associate customer are for transmission of ePHI that do not involve any storage of the information other than on a temporary basis incident to the transmission service."

OCR hasn't clarified what "temporary" means or whether a CDN would qualify, but again, almost certainly not. ISPs qualify, but your data just sits on the CDN indefinitely.

p.s. Hi Patrick and Aditya!

[0] https://www.hhs.gov/hipaa/for-professionals/special-topics/c...


Agree completely with you on this, and based on my experience with OCR, I'd say they would as well. The analogy for a "mere conduit" is the postal service. And that analogy falls apart as soon as you realize that CloudFlare, when being used as an SSL termination point, is opening and repackaging each "letter" on the way to the destination.

I do hate for CloudFlare to be the example for companies playing fast and loose with the rules, but I am hoping we'll have an opportunity in this to clarify the conduit definition a bit more.

Would like to mention that I don't think this declaration applies to every scenario. CloudFlare isn't just one service. I don't see an immediate issue using CloudFlare for DNS on a healthcare app. Neither do I see an issue using CloudFlare as the CDN for static assets. Both of these cases should be evaluated in a risk analysis, but they don't necessitate the level of shared responsibility a BAA entails.


I remember Tavis tweeted Friday night asking for a cloudflare engineer to contact him, and everyone joked that the last thing you want on a Friday evening is an urgent message from tavis ormandy.


That was my tweet believe it or not. I had to turn notifications off on my phone because out of nowhere it was getting bombarded with shares/likes...


I would say the crazy thing is a mere t-shirt as their "bug bounty" top tier award given how they've pitched themselves as an extremely secure service.

https://hackerone.com/cloudflare

I'm sorry but when the reward for breaking into you is basically a massive pinata of personal information...that simply is a bad joke. Security flaws are going to happen and if you aren't going to even offer a reasonable financial reward to report them to you, well, that is just begging to be exploited with a pinata that size.


Nah. Bug bounties don't work for services like CDNs. Maybe they do elsewhere. But for enterprise services, the noise rate is too high, and the very good bug finders are either salaried, free, or working for the adversary.


I think I'd need to see some sort of evidence of this assertion. Bug bounties are commonly offered across a huge variety of online services, and they get results...not always, not necessarily consistently high quality, but even the giants (facebook comes to mind) have had reasonably serious bugs found by people seeking bounties.


He's not wrong about the noise level. I conducted a survey of the most notable bug bounties in 2014 and found that the largest companies either have ineffective programs or quickly scale teams to handle inbound reports full-time. There are security engineers at Google and Facebook who spend a majority of their time responding to, and triaging bug bounty submissions.

That said, I disagree that bug bounties don't work for CDNs. You can scale a bug bounty up, it just requires resources. Cloudflare has those resources, and part of it is a function of the reward tiers you offer.


Bounty researchers aren't the only quasi-rational economic actors in this system. Cloudflare, we might surmise, get enough benefit from their bounty program that they're willing to pay for its administration costs and the occasional T-shirt, but they don't see value in spending more than that.

More than that, access to the service is actually the limiting factor for good bug bounty results. Cloudflare's bug bounty, we might surmise, works as well as it does because anyone can sign up for a Cloudflare account for free. For an enterprise CDN, who won't talk to a potential customer without the prospect of an $x0,000+/year contract, everyone who has enough access to the service to, in the general course of business, find and submit meaningful reports is employed by a customer, and likely prohibited from accepting substantial rewards. Everyone else either doesn't have enough access to submit meaningful reports, or the bug is so bad (like this one) that they'll report it regardless.

Arguably this shows that Cloudflare and other CDNs are right in their calculations: Tavis disclosed this bug to Cloudflare without promise of a payout, or even a T-shirt. Might some good Samaritan on the Internet have noticed the bug and reported it earlier if the bounty was more substantial? Perhaps. But in responding to a vulnerability of this magnitude, you want to work with someone of Tavis's caliber, who has the good of all the stakeholders in mind, not a profit-motivated rando.


I'll gladly offer some anecdotal evidence:

We've got about 2500 tickets in our ticketing queue that have been filed over the past 8 months (excluding spam). Out of those 2500 tickets, only five are valid issues, and only one came with an actual write up.

The signal to noise ratio is absolutely awful - and it's not uncommon for people with invalid issues to demand that you pay them regardless.


Wow, that's much worse than I would have guessed. I would have assumed 10:1, tops. We get security reports, and sometimes they ask for a bounty, and only a very small number are bogus (but we don't have a formal bounty program). Less than half of our security issue reports are totally bogus, and another quarter are theoretical issues, but result in some sort of clean up in the code (e.g. no one can figure out how it could be exploited, but it gets refactored anyway).

I've been meaning to try a formal bounty program, as our software is a high value target (administrative tool running on over a million systems), but we're Open Source and don't have a lot of budget for bounties or anything else. If it produced hundreds of reports for every valid issue, it'd be counter-productive, for sure.


The bounty prices won't be the problem. The constant negotiation over 100,000 different variants of unchecked redirection and login fixation will be the issue. Time is money.

Hacker One should rename itself The Institute For Advanced Redirect Studies. I'm only partly kidding: bug bounty submitters are good at redirecting. Way better than I was before I started handling bounties. There's an interesting epistemological discussion to have about the low-value-yet-severity:critical bugs people file on bounty programs, because the level of cleverness required to exploit URL parsing differences between platforms is no less than what it takes to get an XSS bug.


It sounds like your system might be a candidate for https://wiki.mozilla.org/MOSS/Secure_Open_Source.

There's a form listed under "How to apply", and an email address nearby.

It appears that projects are only documented once audited, FWIW.


> Nah. Bug bounties don't work for services like CDNs. Maybe they do elsewhere. But for enterprise services, the noise rate is too high, and the very good bug finders are either salaried, free, or working for the adversary.

Yes, running a real bug bounty system requires professional security engineers and a professional security posture to sort through the noise. However, when the sole product you are selling is security (i.e. Cloudflare) you kind of have to admit it should be expected that they do so.

It isn't "too high", it simply requires a serious financial commitment to security in the terms of salaried security engineers.

As to your other point, No one works for free. Project Zero is paid for by Google. Security engineers are going to prioritize the purposes that make them real, hard cash.


Here's a question: what's the trade-off in terms of return on investment between hiring salaried security engineers to administer a bug bounty and hiring salaried security engineers to find bugs directly?

Parent's claim, as I read it, is that it's a better use of an enterprise CDN's money to hire security engineers to find bugs than to administer a bounty. Seems plausible to me. Where's that line?


> Parent's claim, as I read it, is that it's a better use of an enterprise CDN's money to hire security engineers to find bugs than to administer a bounty. Seems plausible to me. Where's that line?

Depends on the company, but tbpfh, most security engineers in a group tend to have a culture and that culture creates common blindspots. The fact they weren't testing for this sort of issue (i.e. parser memory leaks) is an example of something that seems obvious to some people that others ignore.

Maybe that is just my experience tho.


Facebook and Google have bug bounties. That's pretty big scale.


Facebook and Google are not, at base, enterprise services.


What would make sense (to me, not a business/marketing guy, nor a lawyer, at all) would be a t-shirt and free subscription as the offered thing, something which costs the company nothing.

Then for anything like this, publicly give a bonus gift which makes it worth people reporting to them rather than selling it on the black market. Once it's gone through the legal dept. and so on.

Then they can be very quick with handing out tshirts and so on to any and every microissue report, without the people running triage having to care about amounts or tax or whatever.

Having any kind of publicly offered payment for service (beyond a t-shirt bounty or services in kind) is just begging for legal issues, right?


> Having any kind of publically offered payment for service (beyond a tshirt bounty or services in kind) is just begging for legal issues, right?

https://hackerone.com/coinbase ($500-$10k) or https://hackerone.com/uber ($500-$10k) or https://hackerone.com/facebook ($500-$10k) or dozens of others have no trouble with it.


The reward includes a t-shirt, it isn't a mere t-shirt. You also get "12 months of CloudFlare's Pro or 1 month of Business service on us" (~$200). The reward is also not tiered.

The award may still not be all that much, but let's not make things up about them.


That's still pretty much as silly as a t-shirt. When a vulnerability was found in my hobby project I paid 200 to the reporter as thanks. From my own pocket, for my own open source program.


If I needed CF Pro though I'd already be on it.

I mean I guess it's good if you're already on Pro and could do with the freebie year but it's not really much to get the whitehats auditing your systems for free*

* free unless they find something


> The reward includes a t-shirt, it isn't a mere t-shirt. You also get "12 months of CloudFlare's Pro or 1 month of Business service on us" (~$200). The reward is also not tiered.

I've never put any of my sites behind Cloudflare precisely because I never had faith their WAF would always be bug free and I'm not comfortable with their MitM position.

Getting me to use your service on a time-limited basis falls more under the category of a "try-it-so-you-buy-it" marketing ploy than a real bonus to me. It benefits Cloudflare more than the researcher, because if they use it, they'll be invested in continuing to "help" Cloudflare since they'll be dependent on it.

I'm sorry, I just don't buy that is anything but a marketing ploy wrapped up as a bonus.


Can someone tell me the implications of this in layman's terms?

For instance, what does "sprayed into caches" mean? What cache? DNS cache? Browser cache? If the latter, does it mean you are safe if the person who owns that cache is an innocent, non-technical user?


There are caches all over the Internet; Google and Microsoft run some of them, but so do virtually every Fortune 500 company, most universities, and governments all over the world.

The best way to understand the bug is this: if a particular HTTP response happened to be generated in response to a request, the response would be intermingled with random memory contents from Cloudflare's proxies. If that request/response happened through someone else's HTTP proxy --- for instance, because it was initiated by someone at a big company that routes all its traffic through a Bluecoat appliance --- then that appliance might still have that improperly disclosed memory saved.


PINBOARD!!!!!!!!! (It's a web-crawling & caching service.)


There are all kinds of places were things are cached, both on- and offline. Your data may end up in:

* Browser caches.

* Sites like wayback machine or search engines that make copies of webpages and save them.

* Tools that store data downloaded from the web, e.g. RSS readers.

* Caching proxies.

* the list goes on and on.

I think what tptacek wanted to say: It's just so common that people download things from the web and store them without even thinking much about it. And all those places where this happens now potentially can contain sensitive data.


Many mobile providers cache heavily as well. In my country, Vodafone does this.


Many services on the internet keep a copy of a page they have loaded in the past. Google does this, for example. It lets them do things like search across websites quickly.

Many of these caches are available online, to anyone who wants to look at them.

This bug meant that any time a page was sent through Cloudflare, the requester might receive the page plus some sensitive personal information, or credentials that could be used to log in to a stranger's account. Some of these credentials might let a bad actor pretend to be a service like Uber or Fitbit.

This very sensitive information might end up saved in a public cache, where anyone could find it and use it to do harm.


What are my rough odds of having stored a credential, if I were a provider?

What are the odds I had a credential stored?

We know the impact, but what are the odds to a provider and to a possible exposee?


It's reminiscent of the earlier days of the Squid cache.

When it had bugs and delivered up cached files, the typical symptom was that everyone in the company got unwanted porn.

Because the biggest user (by far) of the 'net was the person into porn and so 90% of the Squid cache was porn.


It served the wrong resource instead of failing to serve a resource? Back then, if I were to suffer this, what is the likelihood of a porn-for-cats experience?


Far worse than this. Yes, browser caches, but also web crawlers (like google)'s caches. This means that anyone who requested certain public content could have instead received secret content from completely unrelated websites.


As for the SHA-1 collision mentioned by jgrahamc[1] earlier today:

How am I going to explain this to my wife?

Actually a serious question. How do we communicate something like this to the general public?

[1] https://news.ycombinator.com/item?id=13713826


"It's like some extremely popular remailer company accidentally put badly or barely shredded copies of handled letters into other people's envelopes. Strangers' sensitive info is potentially sitting inside unsuspecting mailboxes worldwide."


> It's like some extremely popular remailer company accidentally put badly or barely shredded copies of handled letters into other people's envelopes.

Or used as confetti for a parade: http://www.npr.org/2012/11/27/166023474/social-security-numb...


> A significant number of companies probably need to compose customer notifications;

As a one-man company who has never done this before (and to the best of my knowledge never needed to): Any guides/examples for writing a customer notification for security screw-ups like this? Or just recommendations? Thanks.


It's as easy as throwing a red banner on your website that explains the situation briefly and recommends that users change their passwords. If you want to take this more seriously, you can force a password reset for all users. It depends on how sensitive the information your users trust your site to hold is.


Email your customers, telling them to change their passwords, and link to some info about the leak (in case they don't visit your website and so would miss the security alert banner).

Advise them to change passwords for other services too, list sites possibly affected: https://github.com/pirate/sites-using-cloudflare/blob/master...
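If you do force a reset, the mechanics are roughly as below; a minimal sketch where the user store, session invalidation and mailer (`users`, `invalidate_sessions`, `send_email`) are hypothetical stand-ins for whatever your stack provides:

    # Hypothetical sketch of a forced credential reset after a disclosure like this.
    import secrets

    def force_reset(users, invalidate_sessions, send_email):
        for user in users:
            user["password_hash"] = None                   # old password stops working
            user["reset_token"] = secrets.token_urlsafe(32)
            invalidate_sessions(user["id"])                # kill possibly-leaked session cookies
            send_email(
                to=user["email"],
                subject="Please reset your password",
                body=("A bug at our CDN provider (Cloudflare) may have exposed data "
                      "sent while the bug was live. As a precaution we've logged you "
                      "out and require a password change: "
                      f"https://example.com/reset?token={user['reset_token']}"),
            )

While you're at it, rotating your own API keys and any session-signing secrets is worth doing for the same reason.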


What a mess.

On the plus side, all those booter services hiding behind Cloudflare are probably being probed and classified/identified/disabled by competitors and probably the FBI. That is good.


> This is approximately as bad as it ever gets.

*as bad as it has ever gotten so far.


>Tavis found it by accident just looking through Google search results.

Curious whether there could be some automated way of preventing such widespread cache poisoning in the future. Some ML trained on valid pages from a given domain?

Is it even possible to recover the original content of the documents or was the data randomly inserted into different parts?


Step 1) MITM the entire Internet, undermining its SSL infrastructure, build a business around it

Step 2) leak cleartext from said MITM'd connections to the entire Internet

I recently noted that in some ways Cloudflare has probably caused more damage to popular cryptography than anything since the 2008 Debian OpenSSL bug (thanks to their "flexible" ""SSL"" """feature"""), but now I'm certain of it.

"Trust us" doesn't fly any more, this simply isn't good enough. Sorry, you lost my vote. Not even once

edit: why the revulsion? This bug would have been caught with valgrind, and by the sounds of it, using nothing more complex than feeding their httpd a random sampling of live inputs for an hour or two


>edit: why the revulsion

I'd guess it's because of the crude and reductive way you describe the service Cloudflare provides. I don't know what type of programming you do, but many small services don't have the infrastructure to mitigate the kinds of attacks Cloudflare deals with, and they wouldn't be around without services like this.

I don't like the internet becoming centralized into a small number of places that mitigate DDoS attacks like this, but I like the alternative (being held ransom by anyone with access to a botnet) even less.

I'm going to take a more even handed approach than what you're suggesting. Any time you work with a service like this you risk these kinds of things - it's part of the implicit cost/benefit analysis humans do every day. I'm not ready to throw out the baby with the bathwater because of one issue. I'm not sure what alternative you're suggesting (I didn't see any suggestions, just a lot of ranting, which might also contribute to the 'revulsion') but it doesn't sound any better than what we have.


So rather than demand fixes for the fundamental issues that enable DDoS attacks (IP spoofing, infected computers being allowed to remain connected, etc.), we just continue down this path of massive centralization of services into a few big players that can afford the arms race against botnets.

Using services like Cloudflare as a 'fix' is wrecking the decentralized principles of the Internet. At that point we might as well just write all apps as Facebook widgets.


When in a tactical emergency do not say "and why is this shit raining down upon us?"

That is a separate step. First you either take cover or help.


Problem most often is that after you take cover, you forget to ask that question.


That's not true, though.


Interesting, so to make people stop thinking strategically and run the way I want, I just have to throw shit at them?

Do you see a problem with that?


However, I haven't seen people enable ButtFlare's proxy only when under DDoS. Most of their users enable the proxying just for the CDN performance or just in case or… you get the idea.


Once your origin is under a DDoS attack, how would Cloudflare's proxy help?


Yeah, it wouldn't help if the attackers don't resolve the DNS hostname on ~every request :D But then, there are ways to find the origin anyway (when buttflare is enabled), someone in this thread posted the real IP address of Hacker News…


You stand up the service somewhere else, and point the cloudflare proxy at that.

Everyone in the "cloud" is able to do the migration even without having prepared a disaster recovery plan ahead of time.


>I'm not ready to throw out the baby with the bathwater because of one issue.

Extreme centralization of the Internet is not a "baby", except maybe in the sense of a cuckoo's egg.

But I'm willing to bet the mentality of this comment is highly representative of many web developers and service providers. They will not seek to fix anything, because they don't see this state of things as a problem in the first place.


How about... stop CLOUD THIS and CLOUD THAT.

Cloud means extreme centralization.

It means giving your data to a third party you don't control.

Why?

Why does our networked software have to assume a centralized topology?

In the days when developed countries had dialup, protocols (IRC, Email, etc.) were all decentralized. Today, all the famous developers live with fancy broadband internet connections and forgot what it's like to have to think about netsplits.

The result... all the software is either "online" or broken.

There shouldn't be an "online" or "offline". There should be "do I have access to server X currently?"

Why do we need Google Docs to collaborate on a document if we are all in the same classroom?

Why do we need centralized facebook server farms whose engineers post on highscalability how they enable us all to post petabytes of photos and comment to our friends?

Why do we need centralized sites to comment at all? Each thread is local to its parent.

Why does India need internet.org from facebook?

If communities could have a network that survives without an uplink to the outside world then DDOS from the global internet would just cut off that network's hosting of documents to outsiders. They'd still be able to do EVERYTHING locally - plan dinners, book a local appointment, send an email etc. and even post things out to the greater internet.

This is a future I want to see.

We already have mesh networks. We need more web based software to run these things.

That's what we are building at qbix.com btw.


Your why questions can all be answered by "It's cheaper than hiring a team to do it in-house". At the end of the day it's all about money and non-techy people are often the people in charge of the money.


Doesn't have to be. Services can be packaged into easily deployable packages. It's even easier now thanks to container technology.


I agree with you 100%.

Tim Berners-Lee, the "father" of World Wide Web, is currently advocating for exactly what you are asking for.

See: https://www.decentralizedweb.net/


lol, qbix.com connects to cloudflare.com


What are you talking about?


I visited the website you mentioned, qbix.com. My ad-blocker (uBlock) blocked third-party resources from cloudflare.com on that website. It's a funny fact after reading your comment in this thread.


Step 0) Obtain black funding from NSA budget to start and "VC invest" in a global CDN company...

(Now I'm trawling Crunchbase to see if I can work out which investors are NSA front companies, then I'm gonna look to see what _else_ them and their partners have invested in...)


Covertly get into a company that terminates ssl for half the internet, and... spill your precious secrets everywhere, instead of siphoning them off silently?


Plausible deniability? "How could we have known the flaw was exploited by NSA and FBI? We didn't know about the flaw at all!" When, actually, it was designed by NSA, before they created CF as an attack vector. Eventually the vuln is discovered as was inevitable, but because the caches were theoretically "public" no one notices all the drone strikes and parallel constructions correlated with CF use.

I don't actually believe that, but it isn't an unreasonable theory.


Not NSA, but the CIA funds and operates In-Q-Tel[1]. They've funded companies like Palantir and Keyhole (which became Google Earth).

[1] https://www.crunchbase.com/organization/in-q-tel


I should have done my research, but I walked away from an accepted offer at a company once I found out they took money from In-Q-Tel.


How do you find stuff like this in general? I would love to limit my business to entities I know haven't dealt with other entities I consider suspect, but I don't know how to actually do this filtering.


Could be a good topic for Ask HN?


"Step 0) Obtain black funding from NSA budget to start and "VC invest" in a global CDN company..."

I once came up with that exact concept for a nation-state subversion. It would even pay for itself over time. I kept thinking back to it while watching the rise of CDNs and the security approaches that trust them.


It's long been rumoured in the more paranoid corners of the web that they are an intelligence front/partner.


Of course they're intelligence partners, perhaps not wittingly, but Cloudflare was designed from the ground up to be one of the most interesting targets for every intelligence agency in the world.

After the Snowden leaks it really seems nonsensical to give Cloudflare the benefit of the doubt and assume that they aren't compromised.


Am I misunderstanding, or would this be useful for parallel construction, while the public failure actually subverts the usefulness of Cloudflare as a MITM partner?


They also actively deter Tor use. I've cancelled subscriptions with Cloudflare-hosted sites because they make securely and anonymously browsing their sites a pain.


I'm running a side-project on Cloudflare and it's accessible through Tor without problems. I suspect this comes down to the settings a site owner sets up in their Cloudflare interface. It would stand to reason if for example you applied the highest security setting across the board, Tor and VPN users would get presented with a captcha.


I have been presented with a captcha by Cloudflare many times without using Tor or a VPN. It is the best way to drive users away from your website. My natural reaction is that unless I absolutely need to use this particular website, I move to the next result on Google. Websites that use Cloudflare are suicidal.


> Websites who use cloudflare are suicidal.

I think you are overestimating the number of people doing their regular browsing through Tor.


Again, I wasn't using Tor or a VPN


Is this made clear in their UI? Do they have something saying "this setting will screw over many VPN users" and "this setting will screw over all Tor users"? If not, it's in large part their problem as well.


I would say it's "clear" in exactly the same way the privacy slider within the Tor browser is clear. Why not set everything to its max value always? Because there will be limitations arising from it. In the CF interface it's pretty obvious adjusting the settings in that way will increase filtering and captcha challenges for users.

I think the decision that goes on in the minds of most site operators is "fuck convenience and sleazy Tor users, I want my site to be as safe as they can make it".

It's worth noting that other reverse proxy providers I worked with when freelancing expose the very same controls to site owners. Based on anecdotal knowledge, I'd say anonymized users accessing a site behind CF are subject to less hassle than those accessing a site behind something like X4B with comparable settings.


It makes sense that they treat Tor like a probable adversary, but the cost analysis seems really flawed.

Sure, requests passing through Tor are proportionally more likely to be malicious, but given Tor's bandwidth constraints the threat seems limited.

The costs aren't only the lost business from people like you, but also people who should use Tor giving up on it. There's some wisdom in using anonymized services even to research something as mundane as what your dog ingested, let alone other medical questions.


CloudFlare is neither the first nor the biggest CDN. I can't recall Akamai having a hole this big. They're either more secure or better at keeping things quiet.


To be fair to CloudFlare, Google had a heap issue a few years back (maybe like 7 now) where internal flags and copies of argv (which Google use heavily for config) were clearly present in output from their HTTP frontends, including references to Borg before Borg was ever documented publicly.

Over in App Engine land, someone bypassed their JVM sandbox and managed to extract a copy of their JVM image, which included much of their revered base system statically linked into something like a 500mb binary.

Sorry, I'd have to go digging to find references to either of these incidents. At least in either case customer data wasn't leaking, but suffice it to say it's a little bit of the pot calling the kettle black.

And finally, let's not forget the China incident, which, rumour has it, resulted in a system compromise at Google reaching right to the heart of their engineering organization. Of course they didn't get roasted like Yahoo recently did over their password leak.



I'd like to see how much of a mess their argvs are


Launch Chrome on Linux and grep the ps output.


Step "What does secure mean anyway") SSL terminate even sites that are not sending data to Cloudflare securely


Yup, this made it crystal clear, years ago, that Cloudflare's business incentives were and are at odds with a secure web.


I don't buy this argument.

A site using Flexible SSL is no less secure than one using http://, and in fact is more secure, because nobody can MitM the connection between CloudFlare and the end user. The only thing vulnerable is the connection between the website and CloudFlare (~~and only to MitM, not to passive sniffing~~ EDIT: this isn't true, see [1]), but that's a much smaller and much better-protected surface area.

Now it's quite obvious that the alternative SSL options are much better because they secure the data properly the whole way. But claiming that Flexible SSL is somehow undermining the security of the web is extremely hyperbolic.

[1]: The connection between the origin server and CloudFlare can in fact be passively sniffed. I thought Flexible SSL was the option to use an arbitrary self-signed cert, but it actually means no encryption.


The only thing the end user has is the difference between http:// and https://. Cloudflare undermines that entirely. How can a user possibly ever know whether it's safe to enter their credit card number or medical information in a web form, in a world where CloudFlare "Flexible SSL" exists?


If a user thinks the presence of "https" means it's safe to enter credit card details or medical information, that's already a huge problem. Yes, "https" should be a prerequisite to entering sensitive information, but that's only part of it; the other part is whether you actually trust the server you're sending that information to. The server could be using ironclad encryption across the whole connection, but that doesn't mean they'll still handle your data safely. Any site that wants sensitive information like this has to do many things to ensure it's secure, and making sure they have a secure connection is only one of those things. If you trust that the server operator has done everything else necessary to keep your credit card details safe, then you should also trust that they're not using Flexible SSL.

Edit: Dear downvoters, can you please explain why you disagree? What I wrote really shouldn't be controversial in the least, so I don't understand the drive-by downvotes.


It's always fairly safe to enter credit card details: you can charge back that shit, type it wherever you feel like, and just claim ignorance when it goes poorly. That's basically the whole point of using a credit card and not your bank account, where you're liable for at least some of the money taken.

No company is likely to handle your payment details completely securely. You're relying on sheer luck most of the time and chargebacks for the rest.


This is why PCI Compliance exists. Part of the requirements of PCI are that you must encrypt transmission of cardholder data across the network. So companies that accept credit card details while using Flexible SSL are presumably violating the PCI DSS. Companies handling small volumes use self-assessment, but larger companies are actually audited annually for this stuff.


It's unfortunate that the actual content of PCI is an incoherent and actively counterproductive mess.


A big part of that incoherence comes from the fact that a lot of their guidelines are too broad. For instance, one requirement says all activity performed by an admin must be logged. How many financial companies do this today on every server/device in their PCI environments? My guess is nearly zero, because it's very difficult to find someone who knows what is needed and how to do it correctly, but very easy to avoid even being discovered as being out of compliance.

Then there's the whole lone-auditor thing where a very large data-center or three are being audited by a single person over the course of two weeks, or less. That person is absolutely bombarded with information about an environment that is foreign to them. The end result I think is that so far companies have had it very easy to get by. They only have to pay for a week, or two at most, and whatever limited findings they get are fixed and they move on to the next year.

If companies actually had to live with a slower and more methodical audit, there would be many more findings and a lot more money spent, both on the auditing process and the resulting cleanup. The upshot is this would drive actual innovation in the space of having proper logging, file integrity, encryption, access controls, etc.

The whole audit industry is just.. icky. It needs a massive overhaul and the financials need to be forced to pay for it.


Nice, good to know the credit card companies are doing their best to mitigate their liability due to issues like this too.


> Any site that wants sensitive information like this has to do many things to ensure it's secure, and making sure they have a secure connection is only one of those things. If you trust that the server operator has done everything else necessary to keep your credit card details safe, then you should also trust that they're not using Flexible SSL.

This is true, but conversely there is no legitimate use case for Flexible SSL. Having a datastore like Redis or MongoDB that by default listens insecurely on any address is almost as bad, and such things often compromise the security of a site if it e.g. sends your data across the internet to one of those, but at least there's a more-or-less legitimate use case for that default if it's used on a secured network - it's at least possible that someone using that default isn't deceiving their users. Whereas anyone using Flexible SSL is necessarily deceiving their users (I mean you can argue users might genuinely think "I don't trust my local cafe operator but I do trust the completely public, unsecured internet", but I don't think that's a coherent position for anyone to take).


The use-case for Flexible SSL is when you're not handling sensitive data but still want to offer https:// because really every website should offer it. In fact the blog post that introduced Flexible SSL (https://blog.cloudflare.com/easiest-ssl-ever-now-included-au...) said basically that. The whole point of the feature was it was a simple one-click way to go from http:// to https://.

That said, now that we have Let's Encrypt, and as more tooling gains support for automatically handling that, the value of Flexible SSL is going down, and I do hope they retire it eventually.


> The use-case for Flexible SSL is when you're not handling sensitive data but still want to offer https:// because really every website should offer it.

That's putting the cart before the horse. "Every website should offer" authentication and confidentiality, that's why we want every website to use HTTPS; having a URL that starts with https:// is not a goal in itself.


Flexible SSL still protects the user from being on an untrusted network, from having their ISP read and/or modify their traffic, etc. It's much better than bare http://.

Security is not binary, but you keep treating it like it is. Security is a continuum, and any progress you make towards perfect security is good.


> Flexible SSL still protects the user from being on an untrusted network, from having their ISP read and/or modify their traffic, etc. It's much better than bare http://.

I would strongly dispute the "much". If anything the local network is more likely to be trustworthy than the remote network - people keep talking about cafe wifi, but the user likely knows who's running the cafe wifi and can complain if they start injecting ads etc. Whereas the user has literally no idea who might be on the connection path between cloudflare and the website and listening in, MitMing or anything.

http:// versus https:// is inherently binary; there's no way to display a connection as "half-https://". If it doesn't mean "encrypted while transiting the public Internet" at least, then what does it mean?


Can you think of an existing system (let's go with websites) that meets your standards?


There isn't an automated system that can tell you whether it's safe to give data to a website, just like there's no automated system which can tell you a given vendor/service provider in general is reputable. All you've got is regulations, human-based reputation ranking, and public shaming.


Exactly this. Anytime you give sensitive information to another party you have to evaluate the risk. Having an insecure connection to that party is obviously risky, but that doesn't mean that having a secure connection means there's no risk. Companies that accept sensitive information while using Flexible SSL are probably mishandling your data in other ways too.


> All you've got is regulations, human-based reputation ranking, and public shaming.

Indeed - so we should be applying all of those against CloudFlare, and any other organization that offers or uses a "Flexible SSL"-like product, as firmly as we can.


You seem to be missing the point.

If the company is handling sensitive data, such as credit card information or medical information, there are already regulations to handle that. There's literally no point in trying to add regulations around Flexible SSL specifically, since the usage of Flexible SSL likely already contravenes the regulations for that sensitive data and therefore companies handling that data shouldn't be using it.

If the company isn't handling sensitive data, then again there's no point in adding regulations around Flexible SSL, because what possible benefit would that serve?

Flexible SSL is simply one tool that websites can use. It's intended to be used by sites that would otherwise just be using http://. Sites that do protect more sensitive information certainly could use it, but that would be a bad decision on their part. And we don't need regulations around it specifically, because there's also a million other bad decisions that company could make that would expose that data, and there's really nothing special about Flexible SSL that makes it in particular need of regulation.


Some information might be sensitive for the end user, but not legally protected. Even something as simple as their name or pseudonym can be serious for some people.

I think serving a site over https:// amounts to advertising that information sent to/from that site will not be sent unencrypted over the public internet, and users will use that when deciding what things are or aren't safe to enter into that site. Surely there are regulations that already apply to that? And in any case regulations are only one of the options you mentioned; we should be applying a lot more shame to CloudFlare and anyone who uses "Flexible SSL".


"Cloudflare undermines that entirely. "

In their defense, this is a flaw of the whole SSL/TLS security model. I think even Google did that before Snowden: it presented you with https:// URLs but passed everything around in clear text behind the scenes (they claim they don't do it now). Still, you can be pretty sure that many HTTPS websites pass traffic in clear text to their backends and don't necessarily take security even a little bit seriously.


Google at least proxied everything over their own private fiber. Cloudflare proxies it over the public internet on a long route (since they terminate SSL as close to the client as possible).


Private fiber in other people's datacenters. Better I suppose, but not much.


Unencrypted over private fiber and unencrypted over the public internet are worlds apart.


That has nothing to do with using fiber vs internet though.

EDIT: Original comment said he could pull content off Google results. To respond to the new one:

No, they're not worlds apart when you're on the backbone. They still go through other people's datacenters and that's what causes the problem - we're not talking about stuff that goes over wifi or corporate networks here - we're talking generally just big ISPs in both cases.


> A site using Flexible SSL is no less secure than one using http://,

It can be, in several ways. Most critically, it stops browsers from detecting the connection as insecure and applying mitigations.


Beyond Secure cookies, what mitigations are you thinking of? Secure cookies don't count because serving Secure cookies over Flexible SSL is no less secure than serving regular cookies over http://.


In addition to limiting certain browser features to HTTPS sites, browsers now also warn about submitting passwords over HTTP and mark pages that do so as insecure.

Browsers also prevent HTTPS sites from embedding active content from HTTP sites.


Many browser features (like location API) are gradually being deprecated from plaintext HTTP.


Interesting. I hadn't heard of that before. Looks like it's just Chrome doing this?


And Firefox


Disagree. The point is that showing people a lock that says the connection is secure, when it actually isn't, causes more damage than a visibly insecure connection would (presumably you wouldn't be typing in credit card numbers and other sensitive info if you saw http:// in your address bar).



Yeah, if you're capable of MITMing traffic between CloudFlare and the server, you're most likely capable of stealing emails or HTTP requests to the server and generating your own certificate for it anyway. It's a security loss, but probably a minor one.

The reality is, you're much more likely to get sniffed on public wifi, or on your school or workplace network, than the server is in its datacenter; generally speaking, anyone who can sniff traffic at a DC can already do much worse. So it's still a respectably huge security gain for users.

And they do offer a good way to secure this connection too where you can do full SSL and use a certificate signed by them.

Would you be more comfortable if they offered another way to represent this to the browser? An X-Endpoint-Insecure header or something like that?


> Would you be more comfortable if they offered another way to represent this to the browser? An X-Endpoint-Insecure header or something like that?

Yes, definitely, _Cloudflare_ should own this and push it through. You know they won't though because that would inconvenience their customers.


I'd be more comfortable if they didn't lie about security to site visitors. "Configure a self-signed cert on your hosts so we can encrypt the traffic" is a low bar to clear.


To my sibling: the issue is that people can and do consider Flexible SSL "good enough", when it really isn't. It gets you the green lock and the warm fuzzies, but the page just isn't secure. A false sense of security is worse than no security, because no security at least is glaringly obvious.


But it is secure. It's secure against the user being on an untrustworthy connection, it's secure against their ISP deciding to MitM their traffic, and it's also ~~secure against anyone passively sniffing the traffic between the website server and CloudFlare~~ (EDIT: No it's not, see [1]). The only thing it's not secure against is someone in a privileged network position who can MitM the connection between the website and CloudFlare.

So no, it's not 100% secure, but it's far far better than having an unsecured http:// connection.

As for the green lock, you can blame that on Chrome. I have no idea why they insist on using a green lock and green "Secure" text for DV certs. Safari only uses a green lock / green text for EV certs, which is a lot better (and I don't know offhand what Firefox or Edge do). Of course, you could have an EV cert and still use Flexible SSL, but anyone who cares enough to get an EV cert should know better than to use Flexible SSL anyway, and there's a great many ways to make your server insecure, using Flexible SSL is very far from the worst way.

All that said, it would be great if CloudFlare would just stop offering Flexible SSL in favor of the self-signed CSR approach. Any CloudFlare customer who can create their own cert to talk to CloudFlare can also create a CSR to get a cert from CloudFlare just as easily, so it's not clear to me why they still even offer Flexible SSL.

[1]: I thought Flexible SSL was the option to use an arbitrary self-signed cert on the origin server. gkop pointed out that, no, Flexible SSL means no encryption at all.


Actually, it is worse than just using plain HTTP because it tricks people into believing their connections are secure. There is a significant and growing group of lay people who have been trained not to input sensitive data into non-TLS web pages. "Flexible SSL" effectively screws them.


> it's also secure against anyone passively sniffing the traffic between the website server and CloudFlare

How is it secure? CloudFlare allows you to send this traffic in the clear. If they required this traffic be HTTPS, that would be far better for web security.


My bad. I thought Flexible SSL was the option where you can use any arbitrary self-signed cert. But you're right, Flexible SSL means no encryption at all between the origin server and CloudFlare. I will edit my post accordingly.


What if the origin server forces https on the link between CF and the origin server?


That would be much better. Also Cloudflare gives an option to require HTTPS on this link. What's so sneaky about Cloudflare is they call the insecure option "Flexible SSL" rather than what it is, "Insecure SSL". And a major issue is that the end user has no way of knowing the site's Cloudflare configuration and whether it is secure or not.


There is absolutely no reason to use an EV cert other than to line the pockets of certificate companies. I have never once seen users actually check the details of an EV cert or freak out they have a regular https connection.

When observing non-technical users, I still see people clicking through blatant full page cert errors after connecting to WiFi because they've been implicitly trained that it's the captive portal making them sign in.


You're absolutely right. Cloudflare is a "global active adversary"[1] and has done irreparable harm to the internet at large. This is just a small taste of what's surely to come from CloudFlare's massive influence. They've shown that they cannot be trusted with everyone's data.

[1] https://trac.torproject.org/projects/tor/ticket/18361


"This bug would have been caught with valgrind, and by the sounds of it, using nothing more complex than feeding their httpd a random sampling of live inputs for an hour or two"

Or prevented by using abstractions that do bounds checking. Or even by just using Ragel with a memory-safe language, which would have prevented all issues like this from ever happening. It probably would have been less work even with the reimplementation of an HTTP proxy from scratch.


>with a memory safe language and prevented all issues like that from ever happening.

Drastically reduced, but not eliminated entirely. For instance, in a GC language, especially in this domain, you might do some data pooling to reduce GC overhead. Forget to clear data in the pool and the same kind of error can result.

But yes, I feel like security sensitive stuff like this shouldn't be done in C / C++ any more.
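To make the pooling point concrete, here's a minimal hypothetical Go sketch (my own illustration, nothing to do with Cloudflare's actual code): it's perfectly memory safe, yet a length miscalculation plus a never-cleared pooled buffer still hands one request's bytes to the next.

  package main

  import (
      "fmt"
      "sync"
  )

  // A pool of reusable buffers to cut GC overhead, a common pattern in
  // high-throughput proxies. Nothing here violates memory safety.
  var bufPool = sync.Pool{
      New: func() interface{} { return make([]byte, 4096) },
  }

  // handle copies a request into a pooled buffer and builds a response from it.
  // The bug: it trusts a miscomputed length instead of the bytes it actually
  // wrote, and the pooled buffer is never cleared between requests.
  func handle(body []byte) []byte {
      buf := bufPool.Get().([]byte)
      defer bufPool.Put(buf)

      n := copy(buf, body)
      leakyLen := n + 16 // pretend a parsing bug overshoots the real length
      if leakyLen > len(buf) {
          leakyLen = len(buf)
      }
      out := make([]byte, leakyLen)
      copy(out, buf[:leakyLen]) // stays in bounds, but 16 bytes are stale
      return out
  }

  func main() {
      handle([]byte("user-a-password=hunter2;session=abc123"))
      // The second response carries a tail of the first request's secrets,
      // because the pool hands back the buffer the first request used.
      fmt.Printf("%q\n", handle([]byte("GET /cat.gif")))
  }

No out-of-bounds access ever happens, so neither the runtime nor valgrind-style tooling has anything to complain about; the data just wasn't scrubbed.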


My first thought was relief, thank god I'm not using Cloudflare.

Where would you even start to address this? Everything you've been serving is potentially compromised, API keys, sessions, personal information, user passwords, the works.

You've got no idea what has been leaked. Should you reset all your user passwords, cycle all of your keys, notify all your customers that their data may have been stolen?

My second thought after relief was the realization that even as a consumer I'm affected by this: my password manager has > 100 entries; what percentage of them are using CloudFlare? Should I change all my passwords?

What an epic mess. This is the problem with centralization, the system is broken.


We're compiling a list of domains using several scrapers and updating it here: https://github.com/pirate/sites-using-cloudflare

You can start by cross referencing your password manager with this list, and working your way out from there.


As an aside, I found this really interesting:

ashleymadison.com

ashleyrnadison.com

I find it really interesting that they registered that particular misspelling and they both point to the same servers. I can see doing this for some obvious domains like gogle.com, but the distinction there is simply that r+n looks like m.

Probably a really obvious answer here, but my guess is that they are trying to help people throw anyone browsing their history off the scent.


I think it's more likely that they bought the domain to prevent scammers from trying to bait users onto a fake site and enter login info, and since they have it why not redirect traffic.


You don't have to use scrapers, just grab copies of the TLD zone files and look for Cloudflare nameservers.



Huh TIL. Good call!


Not everyone uses Cloudflare for their proxying service. I use them purely for my DNS, but don't have the MITM proxy enabled at all. His scraping is a better idea probably.


> My second thought after relief was the realization that even as a consumer I'm affected by this, my password manager has > 100 entries what percentage of them are using CloudFlare? Should I change all my passwords?

Yes. Right now. Don't wait for the vendor to notify you.

> What an epic mess. This is the problem with centralization, the system is broken.

Yep.


My password manager has > 500 entries. Changing all the passwords....isn't going to happen any time soon.

If it only took 60 seconds per site, it would still take eight hours to change them all.

Might change a few key passwords, though. Couldn't hurt. I only have a couple of bank/financial passwords at this point. And my various hosting service access passwords.

Anything else is not worth the hassle -- and mostly would have 2FA anyway.


Your argument essentially revolves around "what are the chances I'll be compromised?" rather than focusing on "what's the potential effect of getting compromised?" Most people have data or access rights worth several orders of magnitude more than 8 hours of labor.

The decision to wear a seatbelt isn't driven by the probability of needing it; the decision is driven by the magnitude of exposure to an event where you would need it.


> Your argument essentially revolves around "what are the chances I'll be compromised!?" ...

You misunderstand. My argument is explicitly around "What is the potential effect?" That's why changing financial passwords is on my list of things that I might do. (Though see below for why I won't.)

If I only change passwords where someone can do real damage (my primary social media accounts, my accounts that have a current, saved credit card, and any hosting-related accounts) then I've already hit the 98th percentile in damage avoidance. And as I pointed out above, most (all?) of those accounts are unaffected because they don't use CloudFlare at all.

If someone has stolen my password to the Woodworking Forums, and they ... what, post rabid alt-right spam in my name and get me banned? Oh well, either tell them that it was hacked, or if they don't believe me, let that account die and create a new one, if I ever decide to go back and post something again. No big deal. I haven't used it in years anyway, and I can create unlimited new (wildcard-based) email addresses on any of several domains I own.

Aside from the top 10-15 sites I use, I rarely have logins that are that important, anyway. So I'm totally basing this on worst-case damage assessment, not on "how likely it is I'm attacked."

AND...I just looked through all of the top sites I use, and according to the HTTP header, none of them is served using CloudFlare at all (I only checked the index page of each, but none have the telltale CF-Cache-Status headers). No financial sites, no shopping sites that have my credit card, no social media sites. So where's the fire exactly?


OK, I found ONE site that uses CloudFlare that I use regularly, and I've changed its password.

Which one is it? Hacker News.


In the case of seat belts that's probably because the cost of your life is infinity.

The same isn't quite true for my blogger account.


> In the case of seat belts that's probably because the cost of your life is infinity.

The cost of your life is much higher than your blogger account, but it's not literally infinite, even from your own perspective.

If it were truly infinite, then it would be irrational for you ever to take any action that were not 100% motivated by the desire to protect your life. (Not just "never take any risks", but literally irrational not to actively spend every waking second solely on that goal).


Lastpass knows how to change your passwords for many popular sites, and can automate it away for you.


I have been reluctant to use a service that keeps my passwords for me in the cloud.

Instead I'm using KeePass. KeePass is open source and has its "full stack" of encryption available for review. For LastPass I need to trust they're doing everything right, and that a government actor hasn't asked for some kind of backdoor. It's so easy to screw up security that I'm more comfortable trusting two levels of security: That KeePass has its encryption done right, and that Google Drive keeps my KeePass file out of the hands of bad-guys.

LastPass would become a single point of failure compared to what I'm doing: They just need to make one mistake and suddenly any bad guy gets all of my passwords.

Nice feature for LastPass, though.


LastPass uses local encryption so that LastPass itself has zero knowledge of users' passwords. This means that users' passwords aren't passed in the clear even inside a TLS session.

So LastPass isn't the password manager mentioned in the post.


But Cloudflare might have sprayed out your login credentials while they were travelling through it to the server.


I think he's recommending it, more so than assuming it's what he uses.


You use 500 sites which use 2FA?


No, the ones I consider to be "important" have 2FA.

When I log into the Woodworking Forums, I have to use a password. If someone steals my Woodworking Forums authentication and posts as me there, um....Oh well. Sucks, and I'll clean up the mess.

Glancing through my password vault (kept in KeePass, for those wondering) I have some entries in there that I literally haven't used since before Cloudflare was founded, like the Creative Labs developer site.


Note that for sites like HN, changing your password doesn't expire other sessions. You have to go find every browser with an HN cookie and logout.

(Where I mean some other sites that are not at all HN, but might plausibly exist.)


No, we log you out of all HN sessions when you change your password.


Oh, cool. (This was not the case last time I did a reset.)


I'm pretty sure kogir came up with that one and he's been off working on his bug tracker for a while now.


How do you check if a website uses Cloudflare? Any scripts that do that?


Response headers will contain a "cf-ray" header or "server: cloudflare-nginx"


Both should be there, as well as 'Set-Cookie: __cfduid=...'

  $ curl -I okcupid.com
  Set-Cookie: __cfduid=...
  Server: cloudflare-nginx
  CF-RAY: 335f033b77742b76-AMS
EDIT: Better yet, make that 'curl -IL domain.com' to follow redirects because it may not show in the first response.
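If you'd rather script it than eyeball curl output, here's a rough Go sketch that checks the same signals (it's only a heuristic: headers can be hidden, and a site may have used Cloudflare in the past even if it doesn't today):

  package main

  import (
      "fmt"
      "net/http"
      "os"
      "strings"
  )

  // looksLikeCloudflare fetches https://<domain> (following redirects, like
  // `curl -IL`) and checks for the usual Cloudflare fingerprints.
  func looksLikeCloudflare(domain string) (bool, error) {
      resp, err := http.Get("https://" + domain)
      if err != nil {
          return false, err
      }
      defer resp.Body.Close()

      if resp.Header.Get("CF-RAY") != "" {
          return true, nil
      }
      if strings.Contains(strings.ToLower(resp.Header.Get("Server")), "cloudflare") {
          return true, nil
      }
      for _, c := range resp.Cookies() {
          if c.Name == "__cfduid" {
              return true, nil
          }
      }
      return false, nil
  }

  func main() {
      // Usage: go run check.go example.com news.ycombinator.com ...
      for _, d := range os.Args[1:] {
          found, err := looksLikeCloudflare(d)
          if err != nil {
              fmt.Printf("%s: error: %v\n", d, err)
              continue
          }
          fmt.Printf("%s: cloudflare=%v\n", d, found)
      }
  }

Feed it the domains from your password manager export and work from there.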


There is no reliable way to check. The problem is that even if you verify that a site isn't using CloudFlare now, that doesn't mean that they didn't use it in the past (and you'd still be affected).

In other words: Just assume that everything has been compromised. With how much of the web CloudFlare controls nowadays, you're not going to be far off anyway.


Icon lights up if the current site is on Cloudflare proxy.

https://chrome.google.com/webstore/detail/claire/fgbpcgddpmj...


  $ host -t NS digitalocean.com
  digitalocean.com name server walt.ns.cloudflare.com.
  digitalocean.com name server kim.ns.cloudflare.com.


That may not necessarily work. Example:

  $ host -t NS okcupid.com
  okcupid.com name server nameserver2.okcupid.com.
  okcupid.com name server nameserver1.okcupid.com.
But if you check the response headers you'll see 'CF-RAY:...' and 'Server: cloudflare-nginx'



  $ dig example.com

to get the A record, then

  $ whois 1.2.3.4 | grep -i Cloudflare

Not 100% reliable, but should do the job.


    whois "$(dig +short yoursitehere.com | tail -1)" | grep -qi cloudflare && echo 'Found CloudFlare' || echo 'Didnt find CloudFlare'
Not at a terminal now, but this long one-liner should work.

Like you said, not 100% reliable though. For example, I'm pretty sure Reddit uses CloudFlare, but their whois mentions Fastly, which is a competitor.


We moved off of CloudFlare to Fastly before this vulnerability.


Confirmed, reddit.com was removed from the list. My mistake for not double checking this one.


If you find any domains with this please add them to the list:

https://github.com/pirate/sites-using-cloudflare


I know it's kinda late, but there is one more way to find out if a site is using Cloudflare:

append /cdn-cgi/trace to the URL and you will see some debug info.

Ex:

https://cloud.digitalocean.com/cdn-cgi/trace

https://news.ycombinator.com/cdn-cgi/trace



So it's fixed, then? (I haven't read the article yet.)


No, nothing is fixed. The leak has been plugged, but the water damage (and partly the water itself) is still there.


Weird. I read the Cloudflare blog entry (https://blog.cloudflare.com/incident-report-on-memory-leak-c...) at the bottom of the linked Chromium bug tracker page and they make it sound like it's fixed (the implication being that now would be the time to change all my passwords…)


The problem is fixed in that as far as we know no new data is being made public.

...but since this bug has been out in the wild since perhaps 2016-09-22, now is indeed the time to go and reset your active sessions and change all your passwords.


> You've got no idea what has been leaked

If your site is served through Cloudflare, assume it's all out there because it might be. Standard Big Red Button(tm) procedure.

I don't run any particularly impressive sites but I'll be resetting passwords today. Also cycling things I use behind Cloudflare like DigitalOcean passwords/API keys.

It's supposed to be read-only Friday, Cloudflare :(


I won't take the initiative of changing passwords, and I will only be doing it for services that ask me to do it.

In my opinion, if my accounts get compromised because the provider uses Cloudflare and leaks my data all over, it's their fault, not mine... It's not my job to guess which services are using Cloudflare, which ones were affected... and further, if my account gets compromised, others presumably will.

(PS: Of course you may need to change passwords if you reuse passwords from one service to the other, but obviously you shouldn't be doing that in the first place.)


If someone runs a red light, broadsides you while you're in the intersection, and leaves you paralyzed... it is their fault both morally and legally... but it still sucks to be you since you bear the consequences regardless of fault.

While this event is orders of magnitude less severe than my example, depending on which service was compromised the repercussions can be bad enough that neither the legal system nor any other act of the genuinely responsible party can make you whole or spare you ongoing inconvenience.

I absolutely get and sympathize with where you're coming from... but you may want to check a few of your more important accounts none-the-less :-)


The damage is still yours even if it's not your fault.


I second this, and you'll be even more furious if someone used that to compromise your data/accounts.


They said https never broke, so if you were doing things the right way you should not be affected at all. Do not overreact.


Wrong. You need to re-read the disclosure.


TL;DR for the lazy ones:

> The examples we're finding are so bad, I cancelled some weekend plans to go into the office on Sunday to help build some tools to cleanup. I've informed cloudflare what I'm working on. I'm finding private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings. We're talking full https requests, client IP addresses, full responses, cookies, passwords, keys, data, everything.

This is huge.

I mean, seriously, this is REALLY HUGE.


I don't get it. How is this info leaked? From the blog posts, it seems that "only" the HTTP Headers are being leaked and somehow being crawled by Google? But since when does Google store HTTP request info? Can someone explain?


Headers (among other sensitive stuff) were being leaked inside document bodies.


So just to clarify: some bug makes Cloudflare leak the HTTP Headers into the HTML being served and those HTML pages containing sensitive Info got cached by Google (and others)?


Yes. Think of it this way.

You have a function that strips all colons from your input. For some reason, in certain cases, your code misbehaves, and when replacing a colon with an empty string it accidentally replaces it with other data you have in memory instead. So now all the colons in your input have been replaced with data that you shouldn't have touched, and whoever sent you the input gets it back plus extra data they shouldn't be able to see.

And Google in this case caches those output strings.


@homero (since I can't nest a reply any further), it's not the contents of the crawler's request that gets randomly injected into the page that the crawler requests, but rather the contents of other requests to the same Cloudflare server.

Imagine I'm having a chat on some website X, which uses Cloudflare. Cloudflare acts as a man in the middle, meaning my request, and the response, likely pass through its memory at some point to allow me to communicate with X.

Later, a Google bot comes along and requests a page from site Y. Because of this bug, random bits of memory that were left around on the Cloudflare server get inserted into the response to the bot's request. Those bits of memory could be from anything that's gone through that server in the past, including my conversations on website X. The bot then assumes that the content that Cloudflare spits out for website Y is an accurate representation of website Y's contents, and it caches those contents. In this way, my data from website X ends up in Google's cached version of website Y.


But how is Google getting headers from the users of the sites? Shouldn't it only be seeing its own crawler's headers?


If I (user A) access upwork.com (I just saw this on the list of affected websites, so it's not meant to be an ad), I am sending them my headers. Let's say my headers and other data are saved in M1 (memory register 1).

Then Google accesses the website as the crawler (user B), and its headers and data are saved in M2. However, Google triggered a bug and now has access to M1 as well. So now Google sees its own headers + my data + other garbage.


Imagine this—Google sends a request to get data from malformedhtml.com for crawling purposes. This site's html happens to have that weird incomplete tag problem they mentioned. This site is served by Cloudflare, wherein a buggy script manages to insert some data from the server's memory into the HTML that it returns to Google. Now this data in the memory contains HTTP request headers etc. of _completely unrelated websites_ that are also behind CF.

Google gets this HTML and caches it and that's how it ends up there.


Yeah.

"We leaked information from Customer A to Customer B by accident" is the first order problem.

But the existence of web caches means that all that private information of customer A is potentially fucking everywhere now.

How do you even clean this up? How do you even start?


They leak uninitialized memory contents into the HTML being served; that memory could (and did) contain data from any other traffic that passed through their hands.

So a request sent to Cloudflare customer A's site could return data from Cloudflare customer B, including data that B thought was only being served via https to authenticated users of B.


Not just headers, basically random memory dumps that could contain anything that Cloudflare saw (which is almost everything). Passwords, certificates, you name it.


Essentially. Any headers from any site routing through Cloudflare could get injected into the body of a second site's page if that second site was using the obfuscation feature. Those "mis-stuffed" pages could then be (and were) cached by, among other things, crawlers like those operated by Google and Bing.

Apparently 7xx sites had this enabled, but that affected 4000ish other sites that happened to be on the same infrastructure.


Near as I can tell, the HTTP Headers from one site are being included in HTML of other sites...


Cloudflare handles SSL for a lot of sites. It decrypts everything and passes it along.

For certain other sites, with malformed HTML, there was a bug that caused it to grab random data (headers and body) from memory and include it in the body of the response HTML. (An HTML rewriting product that Cloudflare offered was broken, and it ran on the same servers.)

This stuff got sent to peoples browsers and also to web indexers like Google or Bing.

Google lets you search for stuff and will also show you the original page that it scraped, making it easy to find this data.

Edit: Also you may be seeing more headers in examples because headers are easier to search for.


Everything is leaked, headers are common but full plaintext is leaked in some instances.


HTTP headers were being included in the HTTP response bodies of other random websites. Those websites were being crawled and cached.


requesting a page with a specific combination of broken tags, when done through cloudflare, will cause neighboring memory to be dumped into the response. op suspects this is due to a bounds checking bug on a read or copy. one can imagine this can be potentially kilobytes of data in one go.

since anyone can put a broken page behind cloudflare, all you need to do is request your own broken page through cloudflare, and start collecting the random "secure" data that comes back.


Just deleted my LastPass account - have been converted to KeePass for over a month.


Did Lastpass use Cloudflare? That would be a disaster.



It does not appear that LP was using Cloudflare. Even if they were, at most, your master password is all that leaked. All of their encryption and decryption is done on the client.


Only my master password? That seems kind of important.


I think he got it mixed up. Only the database would be leaked, but not the master password. However, I understand that the client-side decryption is done when using their addon; I'm not sure how it works if you go via the web portal.


> The greatest period of impact was from February 13 and February 18 with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage (that’s about 0.00003% of requests).

1) From the metrics I recalled when I interviewed there, and assuming the given probability is correct, that means a potential 100k-200k pages with private data leaked every day (rough arithmetic in the snippet at the end of this comment).

2) What's the probability that a page is served to a cache engine? Not a clue. Let's assume 1/1000.

3) That puts a bound of around a hundred leaked pages saved per day into caches.

4) Do caches only keep the latest version of a page? I think most do, but not all. Let's ignore that aspect.

5) What's the probability that a page contains private user information like auth tokens? Maybe 1/10?

6) So, that's 10 pages saved per day into the internet search caches.

7) That's on par with their announcement: "With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains." Well, not that we know for how long this was running.

8) Now, I don't want to downplay the issue, but leaking a dozen tokens per day is not that much of a disaster. Sure it's bad, but it's not remotely close to the leak of the millennium, and it's certainly not an internet-scale leak.

9) For the record, CloudFlare serves over one BILLION human beings. Given the tone and the drama I expected way more data from this leak. This is a huge disappointment.

Happy Ending: You were probably not affected.
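For anyone who wants to poke at the numbers, here is the arithmetic above as a tiny Go snippet. Treat the output as order-of-magnitude only: the ~4M requests/second is the ballpark figure mentioned elsewhere in this thread, and the cache and sensitivity rates are the guesses from the points above.

  package main

  import "fmt"

  func main() {
      // Assumptions: ~4M requests/second through Cloudflare and the
      // 1-in-3,300,000 leak rate reported for the worst period.
      // The cache and sensitivity rates are guesses, as above.
      const reqPerSec = 4e6
      const leakRate = 1.0 / 3.3e6
      const cacheRate = 1.0 / 1000
      const sensitiveRate = 1.0 / 10

      leakedPerDay := reqPerSec * 86400 * leakRate // ~1e5 leaked responses/day
      fmt.Printf("leaked responses/day: %.0f\n", leakedPerDay)
      fmt.Printf("publicly cached/day:  %.0f\n", leakedPerDay*cacheRate)
      fmt.Printf("with secrets/day:     %.0f\n", leakedPerDay*cacheRate*sensitiveRate)
  }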


This assumes that the Bad Guys hadn't noticed the bug before Tavis, and hadn't started intensively mining Cloudflare for data.


Intensive mining indeed, if it's true that it requires 3.3M requests to get a page leak.

With a fixed 100Mbps connection and assuming 2kB per HTTP request-response, you can hope to get one leak every 11 minutes and 6.6GB of traffic, which is a constant 5k requests/s.

Maybe if Google reassigns all its SHAttered resources to doing that...

... and then I realize that we were talking about Cloudflare, and my mining bot would just get served a captcha.

---

edit: correction. The bug was affecting only some pages with some content filtering options enabled, and was more prominent under some specific circumstances.

Hence why it only happens 1 in 3.3M on average. An attacker could allegedly leak data much more reliably if he was able to identify the patterns that are more likely to trigger leaks.


Couldn't an attacker construct a page that triggers the memory leak and just keep accessing that page to get different pieces of memory?


Yes. Sign up for service, configure a page with crafted invalid HTML at your origin, activate all three buggy features, and spam it with requests.

If you can find such a page already, just jump to the last step and avoid signing your work.


That's what was observed on the Cloudflare end. Without the multiplicand of how many pages Cloudflare served in a given amount of time, you can't determine the impact. Assuming that affected sites were affected en masse, a targeted attack from a connection would be minuscule compared to the pages Cloudflare serves.

Cloudflare is serving up more than 100Mbps; the attacker only has to zero in on what's fruitful, which yields something far higher than the 1 per 3.3M Cloudflare sees serving millions of innocuous requests.


Mid 2016 they were serving 4M requests per second.


Thank you. This is exactly the missing piece of information that everybody should be aware of.


Botnets


I hope something like intense mining would've caused a detectable spike in turn alerting CF sooner. Looks like it didn't...


Probably not. Cloudflare is designed to automatically swallow large increases in traffic.


I think your estimates fell apart at step 2, 1/1000 pages being cached. HTTP is aggressively cached, on many different layers. I'd put it closer to 1/10.


I meant cached by a public service like Google Cache Bing Archive.org that expose the pages.

A browser cache might be 1/10 but that's not open.


There's a lot of stuff in between those two extremes, speaking as someone who operates an HTTP accelerator and caching server.


People are going to lambast CF for downplaying the impact, and there could be merit in that.

However, I really want to say I am absolutely impressed with both Project Zero AND Cloudflare on so many fronts: clarity of communication, collaboration, and rapid response. So many other organizations would have absolutely tanked when presented with this problem. Huge kudos to the CF folks for understanding the severity and aligning resources to make the fixes.

In terms of P0 and Tavis though, holy crap. Where the heck would we be without these guys? Truly inspiring!


CF's infosec team is very, very good at their jobs.


Then why are they talking about that 3000-ish number instead of the 7 million number?


I assume their writeup got filtered through PR and Legal.



To be fair there's a timeline in the post.


Obviously not, right?


They're human too. Look at the response times!


Yea but... seems like a quick run of valgrind would have caught this


Personally, I've never had an experience with valgrind that could be reasonably characterized as "quick". But YMMV.


Not necessarily.

If they just keep reusing a buffer and forget to clear it in between requests, there is nothing automated that would find it.

Bounds-checking languages would not help either: they would only help if you deleted and reallocated the buffer on each request, and since that's slow it's unlikely anyone would do that.

They probably don't even clear the buffer; instead they rely on keeping track of the length of data in it, so any error in that bookkeeping would be a problem.


If they used ASan/MSan and their support for manually marking regions of memory as invalid/uninitialized, that could have caught such cases too.


Application security team? Probably needs work.

But their overall response to this was still good, and very quick given the scale of the issue.


From Twitter:

"@taviso their post-mortem indicates this would've been exploitable only 4 days prior to your initial contact. Is that info invalid?" - https://twitter.com/pmoust/status/834916647873961984

"@pmoust Yes, they worded it confusingly. It was exploitable for months, we have the cached data." - https://twitter.com/taviso/status/834918182640996353


From my blog on this:

    The three features implicated were rolled out as follows. 
    The earliest date memory could have leaked is 2016-09-22.

    2016-09-22 Automatic HTTP Rewrites enabled 
    2017-01-30 Server-Side Excludes migrated to new parser 
    2017-02-13 Email Obfuscation partially migrated to new parser 
    2017-02-18 Google reports problem to Cloudflare and leak is stopped


With respect, the blog post buries the user in details. In my opinion, there should have been, in bold at the top, something like:

Title: Security report on memory disclosure caused by Cloudflare parser bug

(This is a security report, "incident" underplays this. Memory leak sounds a lot more innocuous than memory disclosure).

Data from any website that was proxied via Cloudflare since September 22, 2016 may have been leaked to third parties via a bug in Cloudflare's HTML parser. Operators using Cloudflare should:

* Invalidate session cookies

* Reset user passwords

* Rotate secrets

* Inform users that private data (chats, pictures, passwords, ...) may have been inadvertently leaked by Cloudflare.

* ...

Users using websites proxied by Cloudflare should:

* Reset their passwords

* Log in/out of sessions to remove session tokens

(Begin rest of post)

Last Friday, Tavis Ormandy from Google’s Project Zero contacted Cloudflare to report a security problem with our edge servers. He was seeing corrupted web pages being returned by some HTTP requests run through Cloudflare. ...


Well fuck. I have no idea what (if any, or all) of my authenticated web sessions have been going through CloudFlare in the last 6 months. How do I even start to protect myself from this?


1. rotate passwords, tokens, auth stuff on any and all service you use that may have used CloudFlare in this time period (as of time of writing this list has not been enumerated)

2. hope that no personally-identifiable info or damaging plaintext that can be tied back to you has been exposed, but you will probably never know for sure

3. join class action lawsuits if you so desire and receive the chump change that is your share once they inevitably get settled

4. ponder what it truly means to willingly (or unknowingly) give information to or through a "trusted third-party" who may employ other "trusted third-parties"

5. languish in unsatisfactory answers and outcomes, return to step 2.


I've compiled a list of 7 million+ domains that use Cloudflare here: https://github.com/pirate/sites-using-cloudflare

Including the subset of the Alexa 10,000 that use Cloudflare in the README.


Here is also a non-exhaustive list of websites using cloudflare: https://index.woorank.com/en/reviews?technology=cloudflare


Reset everything you don't want to assume is public


not trolling, I followed your HN profile link: what blog post? http://blog.jgc.org/


The CloudFlare blog entry [1] authored by him, also submitted by him to HN [2], and also posted by him in this thread [3].

[1] https://blog.cloudflare.com/incident-report-on-memory-leak-c... [2] https://news.ycombinator.com/submitted?id=jgrahamc [3] https://news.ycombinator.com/item?id=13718752#13718782


> One of the advantages of being a service is that bugs can go from reported to fixed in minutes to hours instead of months. The industry standard time allowed to deploy a fix for a bug like this is usually three months; we were completely finished globally in under 7 hours with an initial mitigation in 47 minutes.

Great, that makes me feel so much better! I'm sorry, don't try to put a cherry on the top when you've just leaked PII and encrypted communications.

Additionally, most vendors in the industry aren't deployed in front of quite as much traffic as CloudFlare is. It's a miracle that Project Zero managed to find the issue.


>Cloudflare pointed out their bug bounty program, but I noticed it has a top-tier reward of a t-shirt.

Considering the amount and sensitivity of the data they handle, I'm not sure a t-shirt is an appropriate top-tier reward.


Not only that, but the "reward" in the program is laughable and frankly insulting to any serious researcher, considering the scope of CF. Bug bounty platforms are already becoming the Fiverr of ITSEC (that's not a good thing); CF just made an extra effort to diminish the value for researchers.

Management: "Why do we offer $5k for a small bug again? Look at CF, they don't offer any money!"


> "Why do we offer $5k for a small bug again? Look at CF, they don't offer any money!"

Answer: "Because if they had set up a bounty of $50k for security issues, they'd had thousands of researchers/students/white hats etc. watching the output of their servers."


"...and could maybe avoid or lessen the impact of this fiasco."


I don't disagree.

But, Taviso is probably contractually prohibited from accepting money from CF as a Google employee. Many large companies have 'outside activity' clauses and Google seems to be paying him already for that.

However, it will affect others who are fully freelance.


If serious researchers are looking to get paid, I think bug bounties are the wrong approach entirely


It's about payoff * probability.

Let's say I (an idiot, but knowledgeable enough) stumble upon a serious vulnerability in Google.

Option 1: I could try to sell that on a darknet market for a decent amount of money. State actors, hacker groups, lots of people want to pay for such things to exploit. But, I might not get paid very much, I might get screwed over, I might go to jail, who the heck knows, I'm playing with a bit of fire here. Could make a good pay day though.

Option 2: Google offers a bug bounty that is known to pay well. It probably offers guidance on how much my exploit is worth. They'll almost certainly pay. And hey, no one gets exploited, which most people feel is a good thing.

Value = payout * probability. If bug bounties pay well, option 2 has a higher value most of the time. But if a company offers t-shirts, or is known for screwing over the discoverer, the perceived value falls quickly.

That's why companies who take security seriously pay good bounties, loudly and publicly.


> I might go to jail

Is selling exploits illegal? If so is selling them to google also illegal?


You're not so much selling them to google, you're disclosing them.

It's more of a contractual agreement between you and Google, or whatever company you're reporting the vulnerability to.

As long as you follow the rules for their bug bounty, you'll be fine.


Telling Google about exploits in Google services in exchange for money is not illegal.

Telling them about exploits in other services in exchange for money might be, depending on context.

Your parent was talking about the former case.


> Is selling exploits illegal?

Maybe. If the FBI decides to build a case against you for it, I'm sure they could find a law to use.

> is selling them to google also illegal?

I'm disclosing, and Google is granting me a reward. There's... Some difference I'm sure.


Why? Many can help find problems without having to be full-time, that's the point of crowd-sourcing with payouts.


Because you'll make much more working for people who specifically hire you instead of doing a bunch of risky work on spec.


The point of bug bounties isn't to attract the interest of people who are working to find bugs. It's to make sure that if someone is finding bugs for fun or stumbles over bugs by accident, it's worth their time to report the bugs.


  >>  top-tier reward of a t-shirt.
A t-shirt still seems entirely too small; closer to insulting than motivating.


Sure. I was talking about the general purpose of bug bounties, not the specific value.


An actual pentest would include (I'm assuming) all sorts of NDA's and legal contracts and stuff, all fine if you work in the industry but if you're a bored hobbyist like me, bug bounties are a fun way to try and make a few dollars.


A lot of pentesters make good money off bounty hunting. Some months they make more money off hunting than they do their day job.


I got a t-shirt from cloudflare, and all i did was tell them "please send me a t-shirt" - they shipped it halfway across the world as well, for free! (it didn't fit...)


Good to know the security of their users is worth a t-shirt


I never really got this argument. Is it not much better than the majority of companies that have no bug bounty and where the reporter needs to worry they will be met with legal threats instead of a t-shirt?


Friendly reminder that Cloudflare willingly hosts the top DDoS-for-hire attack sites, and refuses to take them down when they are reported.

Run WHOIS on them, it's almost 100% behind Cloudflare: https://www.google.com/#q=ddos+booter

I would be less concerned about the fact that Cloudflare is spraying private data all over the internet if people weren't being coerced into it by a racket.

We won't have a decentralized web anymore if this keeps going. The entire internet will sit behind a few big CDNs and spray private data through bugs and FISA court wire taps. God help us all if this happens.


>Friendly reminder that Cloudflare willingly hosts the top DDoS-for-hire attack sites, and refuses to take them down when they are reported.

Why should CF be required to police the internet? CF doesn't even host them; they just protect their sites from DDoS and provide DNS.


Cloudflare has spent a lot of time gaslighting people into believing this, but it physically, scientifically, OSI model-y isn't true. Cloudflare hosts web sites. When Cloudflare CDN edges that content, that content exists on their servers. Just because the canonical store is on another machine doesn't mean they don't host the site. If I mirror a site from some other server, and you're loading that site from my server, I'm the one hosting that site. That's how HTTP works.

The argument that they don't know what's hosted on their network has also been demonstrated by evidence as nonsense. The reason the Pirate Bay got blackholed by Cogent last week was because Cloudflare was grouping all of the BitTorrent sites on their network onto a single IP address, and a Spanish court order related to a different site ended up BGP blackholing over two dozen torrent-related sites as collateral damage.

http://seclists.org/nanog/2016/Jul/400 https://mailman.nanog.org/pipermail/nanog/2017-February/thre...

Cloudflare is completely capable of enforcing this, yet they don't do anything about it. It benefits them financially to not do anything, because they get business from these DDoS attackers trashing other networks on the internet, making it so you can only have sites stay up if they are hosted by Cloudflare's broken, bleeding servers.

This is fundamentally an extortion racket. Frankly, it should be a crime. This is exactly the kind of problem laws exist for.


It's not the responsibility of anyone except the police to police those sites. Cloudflare aren't providing those attack sites with an attack vector, they are just serving their webpages. The post office isn't responsible for policing blackmail letters sent through the mail.


The theory that Cloudflare only enforces against sites they receive court orders for is yet another argument that is not backed by evidence. They actively take down phishing attacks, without warrants or court orders. Presumably because if they didn't, Google would shitlist them in pagerank. They behave responsibly and morally when it benefits them financially, and tell everyone they need court orders when it doesn't, even if that decision hurts the web.

It is everyone's responsibility to be responsible members of the internet community. Just because they've found a temporary legal loophole does not give them a moral blank check to be complicit in the murder of the Internet's ability to function.


The morality of hosting the sites of jerks is not nearly as objective as you claim. I could make an argument that they behave morally by treating everyone equally, but they make an exception and perform immorally with phishing sites because google would punish them.

But the real answer is a lot simpler. The DDoS sites are not doing the DDoS through cloudflare. The phishing sites are doing the phishing through cloudflare.

And exposing some DDoS sites to DDoS is not going to fix the root problem. People will still sell DDoS services, and people will still put insecure devices online to become part of botnets.


But it sounds like in the absence of laws, you want private companies deciding what is allowed to be on the internet.


If you really want there to be a nightmare situation where private companies decide what gets to be a web site, just let Cloudflare keep doing this. You'll be left with a centralized internet run by 3 US-based CDN companies that only supports HTTP.

But yes, I absolutely do want private companies to make decisions like this. If Google didn't do this constantly, my search results would be a bunch of spam, scams and phishing attacks.

Requiring the police to get involved every time something bad happens (like a new phishing site) would be the end of the functioning internet and of our ability to enforce laws. Internet tech companies are absolutely expected to behave responsibly on a private level, and are given a lot of legal leeway on the assumption by the government that they will.


"CF doesn't even host them, they just protect their sites from DDoS and DNS."

The #1 excuse people use. They do more than just DNS, they deliver the actual data, that would have been delivered by the original host, to visitors. So I'd consider them hosting an automatically updated mirror, and as bad as the original host.


Related story:

I used to use Cloudflare for DNS, but I left because I was becoming uncomfortable with their policy regarding DDoS attack sites. We run our own Anycast CDN now for the HTTP, but I didn't want to have to deal with the DNS servers so I outsourced it to DNSimple.

Turns out that DNSimple, unbeknownst to me, started using Cloudflare's DNS servers under the hood. They were getting attacked by the DDoS attack sites Cloudflare hosts and it was threatening the service. I figured this out by doing a lookup of their nameserver IPs.

So my attempt to get away from using Cloudflare has meant that I'm just right back on Cloudflare's servers, again.

This is an insidious cycle that will not end well for the internet, or for our freedom on it. The internet will not be decentralized anymore if the entire thing sits on Cloudflare and depends on Cloudflare to function. Cloudbleed is a canary in the coalmine.


Note that if Cloudflare didn't have the content of those sites and their requests in memory this couldn't have happened.


They are charging us money to protect us from the same people they are protecting? Genius.


This strikes me as something that just shouldn't have happened. CloudFlare are pretty big on Go, as far as I can tell (and I guess Lua for scripting nginx). Why was this parsing package written in a non-memory-safe language? Parsing is one of those "obvious" things that are easy to mess up; the likelihood of a custom, hand-written parser being buggy is pretty high. If it's somehow understood that your library is likely to have bugs, why do it in C/C++, where bugs often lead to bleeding memory? In a shop that's already fluent in Go, with the institutional knowledge to do it safely? Sure, performance is not going to be the same, but with some care it'll get pretty close.

Sorry, I hate to just be an armchair commentator. Obviously hindsight is 20/20. Still, I think there's a lesson here.


True. I don't understand why so many of us programmers aren't interested in tools that eliminate whole classes of errors.

* Why do we use memory-unsafe languages (except where Rust or a GC is unusable)?

* Why do we use type-unsafe languages at all?

* Why do we use state-unsafe (mutable) languages at all?

Of course there are exceptions to these - but they are few.


There aren't so many languages that are a) memory safe, b) type safe, and c) thread safe, that additionally offer d) a large enough pool of developers to recruit from.


You don't need (d), but heaven forbid anyone realise it. Now get me a Java dev who's been sitting in a chair at a desk for 10k hours.


The blog post makes it seem like the problem was in an nginx module. Looking at the docs [1] it looks like that's a C API; as far as I know writing shared libraries in golang for a C caller isn't really a thing (because the runtime needs to exist). Rust might have better luck here (I _think_ there have been attempts to get rust code loaded by not-rust code), but I haven't kept track.

[1] https://www.nginx.com/resources/wiki/extending/api/main/



And if you need a more expedient fix for existing C/C++ code, there's SaferCPlusPlus[1].

[1] https://github.com/duneroadrunner/SaferCPlusPlus


This could easily happen in Go as well. All that would be needed is to reuse the buffer in between requests, and rely on the buffer length instead of clearing it.

To make it safer you would need to deallocate and reallocate the buffer for each request, but that might be slow. Doing that would fix it in Go or in C alike; it would be the same either way.

So I'm not convinced that using Go would have helped here.


It's a good point, but at least with Go the leak would be limited to the allocated buffer. This is probably a case where Rust or C++ might be more helpful. Presumably you wouldn't want to allocate a new (variable sized) buffer each time (particularly in a GC language), but you could create a new (bounds checked) slice[1] / array_view[2] / gsl::span / RandomAccessSection[3] each time.

[1] https://doc.rust-lang.org/nightly/std/slice/

[2] https://github.com/rhysd/array_view

[3] https://github.com/duneroadrunner/SaferCPlusPlus#txscoperand...
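A tiny Go sketch of that idea (my own illustration, not any of the linked libraries): keep reusing the backing array, but only ever hand out a freshly bounded view over the bytes written for the current request.

  package main

  import "fmt"

  type reusableBuf struct {
      backing []byte // reused across requests, never cleared
  }

  // fill copies req into the reused backing array and returns a view whose
  // length *and* capacity equal len(req) (three-index slice), so neither
  // indexing nor re-slicing the view can reach a previous request's bytes.
  func (r *reusableBuf) fill(req []byte) []byte {
      if cap(r.backing) < len(req) {
          r.backing = make([]byte, len(req))
      }
      view := r.backing[0:len(req):len(req)]
      copy(view, req)
      return view
  }

  func main() {
      var b reusableBuf
      b.fill([]byte("secret-session-token-aaaa"))

      second := b.fill([]byte("GET /"))
      fmt.Printf("%q\n", second) // "GET /"
      // second[6] or second[:10] would panic instead of exposing the old token.
  }

The bounds checks don't make your length bookkeeping correct for you, but they turn "silently leak a stranger's data" into "panic loudly", which is a much better failure mode.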


"This could easily happen in Go as well."

Not really true. Go operates on slices that panic on out-of-bounds accesses. So, for this to happen in Go you would have to reinvent slices and use a lot of manual C-style code to operate on them, which literally nobody does in Go, because it's too hard.


Recycling memory buffers, like CloudFlare does?

https://blog.cloudflare.com/recycling-memory-buffers-in-go/


I agree. I keep seeing comments about C being the culprit, but in my mind, this is more of a policy issue regarding how any given language initializes and allocates memory.

Sure, in this case, we may see a C-specific bug in play, but I think this sort of bug is more effectively mitigated by forcing buffers to be zero-filled upon allocation and/or deallocation, and perhaps system-wide at the OS level, rather than relying upon language features to cover it.

So, I'm not explicitly defending C here; I just don't think a "memory safe" language makes a similar bug impossible.


That is true, but reusing buffers in Go is a lot more deliberate an action than in C. The possibility is still there, but I think it's way harder to mess up.


It is not the fault of the language if you use it wrong.

CloudFlare is to blame here, nothing else.

As for the reason why C, I'm pretty confident they knew what they were doing, and had considered other tools that did not meet all requirements.


Allow me an analogy. It's not the fault of a rope if you use it to cross between two skyscrapers and slip and fall to the ground. But if you value your life, you at least use a safety cable to tie yourself to the rope, or take the lifts and cross at road level. There are still cars to watch for, but...

C is simply too dangerous; even the best developer can slip without noticing. There are safer alternatives now, and we should start using them, at least for new projects.


Security is a requirement. They must have been extremely confident indeed to write something like this in C, where a single mistake can make your program fail in catastrophic ways, with no help whatsoever from the compiler.

If some code has bugs, did the author just "use the language wrong"? People make mistakes, and we can prevent some of them by using better tools.


No, the language is bad if using it wrong can leak sensitive data.

The choice of language is wrong if you pick such a language in a situation where mistakes can lead to safety or security problems.

The first requirement is security.


But I can't think of a single language in which using it "wrong" might not lead to info leaks. Any language with a runtime has to manage memory somehow at the runtime layer, so similar leaks can occur there depending on the design and implementation, and on the wider OS context.

At the whole program/application level, when you create your own data structures, you can find lots of ways to leak them to the world.


No one calls C#, JavaScript and Python memory-unsafe because their runtimes are implemented in C. Nor do I expect CF to stop using Linux or nginx because they are written in C. We have to live with C, but I expect everyone who does anything safety- or security-critical to do everything they can to minimize the amount of code that is susceptible to this class of bug.

Using a runtime with a safe language on top is a perfectly good example of doing that.

Logic errors causing leaks will always be a threat, but we shouldn't be leaking because of pointer arithmetic problems in custom C code. Not 2017.


Regarding C#, the plan is to increasingly move C++ code to C#, now that they have Roslyn and .NET Native.


You can leak sensitive data with any language. C is not used for web development on the client side, yet people abuse security holes in web apps all the time.


That's not an argument for using C. There are many classes of bugs and using a safe language only protects against one class.

What I'm saying is there is no excuse not to take that protection.


Memory safe languages aren't a panacea. There could just as easily have been a bug in the compiler or standard library with the same result.


Sure... but that probability is equally present in the non memory-safe language, so that doesn't change anything.


I've yet to learn a language where not handling exceptional cases properly did not result in a bug.


The point is not the absence of bugs. In a memory-safe language, a bug usually doesn't lead to leaking the contents of freed memory.


I've compiled a list of 7,385,121 domains served through cloudflare using several scrapers. https://github.com/pirate/sites-using-cloudflare

The full list is available for download here (23mb) https://github.com/pirate/sites-using-cloudflare/raw/master/...

I will be updating it as I find more domains.


More than 7 million domains... Letting that sink in...

I'm assuming this list is based on DNS records? I wonder what proportion of those offloaded their SSL to Cloudflare.


I had duplicates, it's actually only 4,287,625 (still a lot though).

Fixed the duplicates: https://github.com/pirate/sites-using-cloudflare/raw/master/...


Cloudflare isn't just a security hole in the middle of the internet, they're a protection racket.

If you wanted to pay to DDoS a site, search for "booter" and you'll get a list of sites that will take another site off the internet for money with a flood of traffic.

quezstresser.com webstresser.co topbooter.co instabooter.com booter.xyz critical-boot.com top10booters.com betabooter.com databooter.com

etc. etc. Of the first 30 results, I could find only 2 booter sites that weren't hosted by Cloudflare.

But hey, pay Cloudflare and your site too can be safe from DDoS attacks...


In what way is this a protection racket? That's sort of like complaining that mob-owned businesses enjoy the same police & fire protection that all other businesses have.


Cloudflare sells protection from internet attacks through its network. The same company and network facilitate the organisation of those same attacks and help keep them anonymous.

That's a high-tech protection racket.


I get this argument. I have made it in the past.

But CF doesn't want to play Internet cop. Everyone who manages a service gets a constant barrage of "someone using your site did something offensive, I want you to kick them off your service!"

CF has decided they are just not going to play the game, at all. Because once they start, then all the piranha come to feast.

I'm not saying this means they aren't a racket, in the sense of charging people money to solve a problem you created. But they do have some good reasons for simply refusing to censor what they offer.


It's not a game, it's policing your own network and keeping your business activities legal. My network has run an abuse desk for 15 years and there are no feasting piranhas (what does that even mean?).

Cloudflare definitely already runs an abuse desk and bans accounts; they just choose not to ban network-abuse tools. They are making the internet a more dangerous place for hosting, then asking you to buy a solution. They could search Google for "booter" and "ddos tool" and whatever else, and flag sites for banning; it's a project an intern could do. But they don't, and they suck for that.


They could ban booters. But then someone else will say "but you allow <some other type of site>! They're clearly bad, you should ban them too". And so they do, and now someone else complains about some other site. Once you start banning sites for the content they hold, where do you draw the line? I don't fault CloudFlare for drawing it at the legal barrier (e.g. no CP).


CloudFlare should not align itself with the adversaries its mission is to protect its users from. This isn't a slippery slope distinction, this is a binary exclusion.


> Once you start banning sites for the content they hold, where do you draw the line?

I mean, you could always just draw the line at booters. Not everything has such a slippery slope.


You can say that. But I guarantee you if they do that, other people will think they should ban other sites too.

Really the only way to avoid the problem is to not play the game, and so that's what CloudFlare does. It's pretty much the only defensible stance to take.


They can draw the line wherever they like, they are under zero obligation to provide a service to anyone they don't want to.


Call it a conflict of interest then. The worse the internet at large becomes, the more money cloudflare makes.


DOS attacks being a bad thing is the whole reason the service exists, so to then group them with "things some people consider offensive" is just doublethink. If CloudFlare didn't want to play internet cop with regard to DOS attacks, it would not exist. Since it does, it might as well say the same thing with both sides of its mouth.


DDoS attack protection is just one of the services CloudFlare offers. Saying it's the whole reason the service exists suggests that you haven't actually looked at what they do.


Change "the reason" to "one of the main reasons" (going from top left to bottom right, it's the second of their four main features), and notice how my point remains untouched.


Your point is still incorrect. CloudFlare is NOT playing "internet cop" for DOS. They're providing armor against it, but they're not "policing" it in any sense of the word.


So DOS attacks are "clearly bad" when it comes to providing "armour" against them, but a matter of taste, something a person couldn't possibly have an opinion on, when it's a client of theirs?

> they're not "policing" it in any sense of the word.

I didn't say they do; I said it's two-faced. Which it is.


Most of those booters are on their free tier, so it's a bit hard to argue it's a racket.

If you want to claim it's unethical... maybe. But if you think about it from their position, it could genuinely become a slippery slope if they start policing which services they're reverse proxying. Especially considering the rate at which they're growing now.

Think of it this way: should Google be compelled to remove all search results for all booters and other malware-related services? It's asking a lot.


Problem is, these are just frontends advertising the booter services. They're not serving malware themselves.

Cloudflare, like Google, does have a similar program and does remove websites that are directly hosting malware or phishing pages. They just don't remove the gray-area stuff, like hacking forums and black market customer portals.


It's not a racket. Refusing to police their own customers, and having customers that do bad things that CloudFlare incidentally helps protect against, does not make it a racket.

In a protection racket (or more accurately an extortion racket), businesses that don't pay up will get attacked by the racketeers, and so for the most part paying up just means the racketeer won't attack them. That doesn't even remotely describe CloudFlare. Whether or not you pay for CloudFlare doesn't affect whether some other customer of CloudFlare attacks you. And the fact that those other customers are using CloudFlare themselves does not make CloudFlare responsible for their actions.


Another implication: they could be using their access to these sites' traffic to prepare their own infrastructure for attacks before they happen.

There's nothing about their hosting of these sites that doesn't reek.


That is how DDOS protection works: learning from data and scale to better defend against future attacks. Every large network and security operator does this. What is your issue with that, exactly?


I don't mean they are learning from observing attack traffic; they have access to the command-and-control traffic.

That means they could know about an attack before it happens.

They could know how long it will last, who the target will be, and what volume of traffic to expect.

They could know who had ordered it, who had paid for it.

They. Also. Sell. Protection.

To call the situation a deep conflict of interest is an understatement.


Given that the whole class of operators seems somewhat shady, I imagine they sometimes need to fend off attacks from competing services. In that case, being on a free DDoS protection plan seems like a reasonable thing to do (from their point of view). As long as they're not initiating the DDoS via Cloudflare, I'm not sure how that would be unreasonable on CF's part, given that I assume it's all automated and nobody ever looks at which sites have signed up.

I don't like CF for their fishy SSL architecture, the increased centralization of internet traffic, and the constant captchas when using Tor, but the DDoS protection part (regardless of what sort of people they're providing service to) seems fine.


I don't really understand your point. In 2012 I was working on a startup that was DDoS'd and it was not fun. This was back before Cloudflare offered a DDoS service and we ended up having to hire a random company in Canada to help get us back online. At the time there were surprisingly few people out there offering DDoS mitigation. Cloudflare wanted to help us but they were still in early development for their service, but I remember them being good guys. What's wrong with providing a service to help fight the bad guys?


He is pointing out that Cloudflare is also hosting the DDoS sites.

Those are the sites you can go to to pay for a DDoS.

So they are taking money from the very people you are paying them to defend you against.


The problem comes from the conflict of interest when you're also hosting the bad guys.


It's the same stance that antivirus developers have always had, more or less. As usual, the difference between blackhat and whitehat is very, very thin - if there is a difference at all.


Be careful posting random domains. HN might flag/throttle your account for spamming. It happened to one of my accounts.


That's rare but possible. If you weren't spamming, I'm sorry. Let us know at hn@ycombinator.com and we'll fix it.


Thanks for the reply. It has been a while and I don't even remember the username of that account. It wasn't that important to me (plus, it was a relatively new account), so I didn't bother contacting HN.


By the same logic, the search engine you used to find those sites is also a "protection racket".


Really? That search engine sells DDOS protection?


Sure. I just searched for "ddos stresser" on three different search engines. The first 1-3 results were ads for Cloudflare and similar services, followed by organic results for several of the sites mentioned above. One could make the same dubious argument that it's a protection racket (free exposure for the "bad guys" while profiting from mitigation).


You are essentially arguing against freedom of speech. Cloudflare will protect any site that doesn't host child porn. Yes, that includes things you don't like, but it also includes all the things you do.


> Cloudflare will protect any site that doesn't host child porn

Doesn't that make it worse? They aren't saying they don't or won't police the content they protect. They are obviously capable and willing to draw a line on ethical or legal grounds, if they have done so in that case. They have just chosen to draw that line on one side of porn but another side of DDoS services.

Ultimately it is their decision to make, but I don't think it's unfair for people to question possible conflicts of interest in how that decision is made.


Why are you combining legal and ethical? They're capable and willing to draw a line on legal grounds. Seems pretty clear.


Not combining; that's why I said "or".

And I said that because I'm not sure why they've made that decision... it could have been either or both. And the sale of DDoS services is arguably illegal in at least some places, so they obviously aren't rejecting all illegal content.


I ... didn't say any of that.


DDoS attacks are the ultimate form of censorship.


You don't understand what freedom of speech actually means.