
Oh, my god.

Read the whole event log.

If you were behind Cloudflare and it was proxying sensitive data (the contents of HTTP POSTs, &c), they've potentially been spraying it into caches all across the Internet; it was so bad that Tavis found it by accident just looking through Google search results.

The crazy thing here is that the Project Zero people were joking last night about a disclosure that was going to keep everyone at work late today. And, this morning, Google announced the SHA-1 collision, which everyone (including the insiders who leaked that the SHA-1 collision was coming) thought was the big announcement.

Nope. A SHA-1 collision, it turns out, is the minor security news of the day.

This is approximately as bad as it ever gets. A significant number of companies probably need to compose customer notifications; it's, at this point, very difficult to rule out unauthorized disclosure of anything that traversed Cloudflare.




In case you're wondering how this could be worse than Heartbleed:

Yes, apparently the allocation patterns inside Cloudflare mean TLS keys aren't exposed to this vulnerability.

But Heartbleed happened at the TLS layer. To get secrets from Heartbleed, you had to make a particular TLS request that nobody normally makes.

Cloudbleed is a bug in Cloudflare's HTML parser, and the secrets it discloses are mixed in with, apparently, HTTP response data. The modern web is designed to cache HTTP responses aggressively, so whatever secrets Cloudflare revealed could be saved in random caches indefinitely.

You really want to see Cloudflare spend more time discussing how they've quantified the leak here.


> You really want to see Cloudflare spend more time discussing how they've quantified the leak here.

What would you like to see? The SAFE_CHAR logging allowed us to get data on the rate, which is how I got the % of requests figure.


How many different sites? Your team sent a list to Tavis's team. How many entries were on the list?


We identified 3,438 unique domains. I'm not sure if those were all sent to Tavis because we were only sending him things that we wanted purged.


3,438 domains which someone could have queried, but data from any site whose traffic had "recently" passed through Cloudflare could potentially be exposed in the responses, right? Purging those results helps with search engines, but a hypothetical malicious secret crawler could still potentially hold data from any site.


It doesn't have to be a secret crawler. Just one that wasn't contacted by cloudflare (I didn't see any non-US search providers mentioned).


In other words, Baidu are currently sitting on a treasure trove of keys and passwords.


Possibly not, Baidu and CloudFlare have a well-documented long-term partnership.


Maybe there's much more to worry about in Baidu's particular not-so-well-documented, but longer-term, partnerships.


Oh, absolutely. Baidu's relationship with their host nation should be a source of concern for us all. I've heard some interesting and unusual stories.

But they're probably aware of this issue and know enough to go looking to purge their caches.


Or Baidu know enough to not purge their caches. Think of the amount of tangible gratitude that their host nation would show them for access to some potentially tasty information....


Swap baidu for google or microsoft in that sentence and it still has the same problems. Every government 3 letter agency has a vested interest in the secrets.


Whether you believe it or not, there is actually a tangible difference between the relationships US corporations have with the USG vs other nations and their corporate entities.


They're not all 3 letters (e.g. GCHQ, ASIO, CSIS, DGSI, etc.).


It's an expression


Well, purge their public cache, after taking a private dump and supplying it to those who would find value in such a thing.


>"I've heard some interesting and unusual stories."

Do you care to share or elaborate on this?


They're not my stories to share, I'm afraid.


+Yandex


I wonder if archive.org or archive.is have anything cached...


archive.is was red, meaning it uses Cloudflare....

www.doesitusecloudflare.com


The concern isn't that they use Cloudflare. The concern is that they're spidering the Internet, and therefore might be storing cached data that Cloudflare leaked.


while the internet archive / wayback machine do spider, I think archive.is only archives a site "on demand"


Yes but with all the people and even automated 3rd-party scripts making use of archive.is, it is practically a spider.


No TLS on this site?


correct


Have you asked them for an eta on your shirt?


You know a company isn't serious about security when their top security bounty is a t-shirt. Instagram has a better policy, for God's sake.


Instagram has been part of Facebook for over four years, so they are covered by the Facebook Bug Bounty: https://www.facebook.com/whitehat


I'd love to see some evidence that big bounties correspond to more exploits being found. In my experience, they tend to result in an increasing amount of crap for your security team to sort through.


Plenty of companies that are serious about security don't do bounties. They're a real pain to administer, apparently.


I'd expect a company that can MITM a good chunk of the Internet to incur that pain in exchange for all the money customers pay them.


fuck :(


Indeed, this is the point in the comment thread where you get the feeling the internet is broken.


What I'm wondering: how many fuckups like this need to happen for website owners to realize that uber-centralization of vital online infrastructure is a bad idea?

But I guess there is really no incentive for anyone in particular to do anything about this, because it provides a kind of perverted safety in numbers. "It's not just our website that had this issue, it's, like, everyone's shared problem." The same principle applies to uber-hosting providers like AWS and Azure, as well as those creepy worldwide CDNs.

Interestingly, it seems this is one of the cases where using a smaller provider with the same issue would really make you better off (relatively speaking) because there would be fewer servers leaking your data.


Offer a way to cheaply fix DDoS attacks, as Cloudflare does, and people will move away. It's a big problem, and the general consensus is, "just use Cloudflare to fix your DDoS problem!"


You might as well scrap http entirely, with or without the "s".

The web simply doesn't scale. The only way to fix DDoS reliably is peer-to-peer protocols. Which hardly ever happens because our moronic ISPs believed nobody needed upload. Or even a public IP address.


as someone who has been involved in a number of moronic ISP designs, operations, and build outs --- asymmetric access networks are designed that way due to actual traffic patterns and physical medium constraints.

you can argue "if everything was symmetric, then traffic patterns would be different" and you might be right, but that's not how the market went or how the "internet" started.

the client-server paradigm drove traffic patterns, and there was never any market demand or advantage by ignoring it.


That's not how the market went because the market is often moronic. Case in point: QWERTY. (Why QWERTY is actually the best layout ever is left as an exercise to the occasional extremist libertarian)

Yes, traffic patterns at the time were heavily slanted towards downloads. I know about copper wires and how download and upload limit each other. Still, setting that situation in stone was very limiting. It's a self-fulfilling prophecy.

You don't want to host your server at home because you don't have upload. The ISP sees nobody has servers at home so they conclude nobody needs upload. Peer-to-peer file sharing and distribution is slower than YouTube because nobody has any upload. Therefore everybody uses YouTube, and the ISP concludes nobody uses peer-to-peer distribution networks.

And so on and so forth. It's the same trend that effectively forbade people from sending e-mail from home (they have to ask a big-shot provider such as Gmail to do it for them, with MITM spying and advertisement), or the rise of ISP-level NAT, instead of giving everyone a public IPv6 address like they all deserve (including on mobile).

There is a point where you have to realise the internet is increasingly centralised at every level because powerful special interests want it to be that way.

Regulation is what we need. Net neutrality is a start. Next in line should be mandated symmetric bandwidth, no ISP-wide firewall (the local router can have safe default settings), public IP (v4 or v6) for everyone, and no restriction on usage patterns (the ISP should not be allowed to forbid servers). Ultimately, our freedom of expression and freedom of information depends on this. They are messing with human rights.


> Peer-to-peer file sharing and distribution is slower than YouTube because nobody has any upload.

And because IP multicast doesn't work over the internet. If it did, even if merely to some limited extent, some asymmetries would be far easier to stomach.


> you can argue "if everything was symmetric, then traffic patterns would be different" and you might be right, but that's not how the market went or how the "internet" started.

It may not have been how the market went but it definitely was how the internet got started.


You say this as I look at my positively anemic upstream that makes browsing even simple Nagios pages painfully slow, and my ISP that doesn't offer anything substantively better without a massive increase in monthly costs.

The traffic patterns for higher upstream aren't there because they can't be there.


Decentralisation doesn't do a whole lot better. Just think about MTA or DNS vulnerabilities, for a start.


Or look at how many websites are still vulnerable to Heartbleed.


The Internet will remain periodically broken until we put a cost metric on the breaking (and working) times.


Which means any user who has used any service which uses CloudFlare, right? At least in theory.


How can I find out which of the services I have accounts with are using Cloudflare? Or better, which have been using Cloudflare in recent months? Assume I have a list of domains where I have accounts.


We're compiling a list of affected domains using several scrapers here:

https://github.com/pirate/sites-using-cloudflare


I ranked your list of Cloudflare-using domains by their Alexa rank.

Sharing here in case anyone else finds it useful

(warning - it's 1.1MiB gzipped / 2.4MiB uncompressed)

https://polarisedlight.com/tmp/cf_ranked.txt

any domains outside the top 1 million are omitted


Hacked this together to determine which ones out of the list are potentially using cloudflare reverse proxies. You could also send an HTTP request to them and look for the cloudflare-nginx Server header.

https://gist.github.com/dustyfresh/4d8d364ca4c6da465cfc7d817...
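
If you'd rather not run the gist, here is a rough sketch of the same idea (mine, not the linked script) using libcurl: at the time Cloudflare's proxies identified themselves with a "Server: cloudflare-nginx" response header, though the absence of that header doesn't prove a site never sent traffic through them.

    /* Sketch: issue a HEAD request and flag responses whose Server header
     * mentions cloudflare. Build with: cc check_cf.c -lcurl */
    #include <curl/curl.h>
    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    static size_t header_cb(char *buf, size_t size, size_t nitems, void *flag) {
        char line[256] = {0};
        size_t len = size * nitems;
        memcpy(line, buf, len < sizeof(line) - 1 ? len : sizeof(line) - 1);
        if (strncasecmp(line, "Server:", 7) == 0 && strstr(line, "cloudflare"))
            *(int *)flag = 1;
        return size * nitems;          /* tell libcurl the header was consumed */
    }

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s <url>\n", argv[0]); return 1; }
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *h = curl_easy_init();
        int behind_cf = 0;
        curl_easy_setopt(h, CURLOPT_URL, argv[1]);
        curl_easy_setopt(h, CURLOPT_NOBODY, 1L);            /* headers are enough */
        curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(h, CURLOPT_HEADERFUNCTION, header_cb);
        curl_easy_setopt(h, CURLOPT_HEADERDATA, &behind_cf);
        if (curl_easy_perform(h) == CURLE_OK)
            printf("%s: %s\n", argv[1],
                   behind_cf ? "Server header mentions cloudflare" : "no cloudflare Server header");
        curl_easy_cleanup(h);
        curl_global_cleanup();
        return 0;
    }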


You can check IP whois records, but it'll be very hard to be 100% sure about any of them. For example, one of the examples from the bug report is Uber, which doesn't use Cloudflare for its home page but apparently does for one of its internal API endpoints.


There is a Chrome extension named "claire"[1] which tells you if they use CloudFlare or not, but I'm not sure about other browsers (Firefox or others).

[1]: https://chrome.google.com/webstore/detail/claire/fgbpcgddpmj...


For Firefox, I just made this: https://github.com/traktofon/cf-detect


At this point, I would just start rolling everything. (And I have.)


[edit: correction]


No. 3438 domains were configured to expose this, and were potentially queried and logged by a far greater number of people. And yet other data (anything in cloudflare for months) could be exposed.

Potentially huge amounts of stuff might be exposed, but I have some assurances that "the practical impact is low" from someone I trust, so I think it's just a lot of random data. I'd still rotate all credentials which passed through Cloudflare in the past N months (and if I were a big consumer site NOT on Cloudflare, I might change end user passwords anyway, due to re-use), but I don't think it will be the end of the world.


It may seem like a nightmare Internet data security scenario, but it looks like Tavis is going to get a free t-shirt out of the deal, so let's just call it a wash.


What anomalies would be apparent in your logs if someone malicious had discovered this flaw and used it to generate a large corpus of leaked HTTP content?


That's also what I'm interested in. There's a lot of talk about the sites that had the features enabled that allowed the data to escape, but it's the sites that were co-existing with those that were in danger.

In terms of the caching, knowing the broken sites tells you where to look in the caches after the fact, but do you have any idea whose data was leaked? Presumably 2 consecutive requests to the same malformed page could/would leak different data.


> Presumably 2 consecutive requests to the same malformed page could/would leak different data.

Wouldn't the second request be served from the CDN cache? Since for Cloudflare that particular page is a valid cached page, it would send you that same page on the second request.


Only if the leaked memory is in the response before the response is cached.


I don't know enough about the layers in the cloudflare system to say. Does it only apply to cached pages? What about https? They would have the ssl termination first and then these errant servers behind that - none of those pages would be cached, right?


Cloudflare doesn't cache HTML pages by default.


it seems to me you'd have to know at a minimum:

1. every tag pattern that triggers the bug(s)

2. which broken pages with that pattern were requested at an abnormally high frequency or had an unusually short TTL (or some other useful heuristic)

3. on which servers, and at what time, in order to tell

4. whose data lived on the same servers at the same time as those broken pages

to even begin to estimate the scope of the leak. And that doesn't even help you find who planted the bad seeds.


Here's a question your blog post doesn't answer but should, right now:

Exactly which search engines and cache providers did you work with to scrub leaked data?


Also, have you worked with any search engine to notify affected customers?

ex: Right now there is an easily found Google cached page with OAuth tokens for a very popular fitness wearable's Android API endpoints.


Are you guys planning to release the list so we can all change our passwords on affected services? Or are you planning on letting those services handle the communication?


That list contains domains where the bug was triggered. The information exposed through the bug though can be from any domain that uses Cloudflare.

So: all services that have one or more domains served through Cloudflare may be affected.

The consensus seems to be that no one discovered this before now, and no bad guys have been scraping this leak for valuable data (passwords, OAuth tokens, PII, other secrets). But the data was still saved all over the world in web caches. So the bad guys are probably now after those. Though I don't know how much 'useful' data they would be able to extract, and what the risks for an average internet user are.


> The consensus seem to be that no one discovered this before now, and no bad guys have been scraping this leak for valuable data (passwords, OAuth tokens, PII, other secrets).

This is literally as bad as it gets; anyone trying to palliate the situation has something to sell you. You'd have to be an idiot to think that $organization (public, private, or shadow) doesn't have automated systems to check for something as stupidly simple as this by querying resources at random intervals and searching for artifacts.

Someone found it. Probably more than one someone. Denial won't help.


Ah, gotcha. Thanks for explaining!


Four other people I know and I all happened to get our reddit accounts temporarily locked due to a "possible compromise" in the past week or so, which has never happened to any of us before. Anyone else?


That would be unrelated to this. We haven't taken any action on any accounts because of this issue and have no plans to, as we (reddit.com) were unaffected.


Happened to me as well. If it's not related to CloudBleed, can you tell us specifically what happened? It's making me not trust Reddit.


If anything, it should make you trust reddit more! I don't know the exact details as to why your account may have been locked, but generally it will be because we're being proactive and have some signal that your account is using a weak or reused password.


Why was reddit on the list of affected sites, and how do you know reddit wasn't affected?


My reddit password failed a week ago, and I had to do an email reset. And I use a password manager.


In that case I'm even more inclined to think it might be because of Cloudbleed.


I've compiled a list of 7,385,121 domains that use Cloudflare here: https://github.com/pirate/sites-using-cloudflare


This list is misguided. It's just a dump of sites using Cloudflare's DNS, a hugely popular and (mostly) free service. The vulnerability only affected customers using Cloudflare's paid SSL proxy (CDN) service. The latter is a much smaller subset. Even then, only a subset of the SSL proxy users, those with certain options enabled that caused traffic to go through a vulnerable parser, were really impacted. I'm not sure a list as broad as this is helpful.


At least some of this is incorrect. The issue is NOT the pages running through the parser — the issue is the traffic running through the same nginx instance as vulnerable pages.


You are right in that other sites are affected but only the sites running through the parser would have leaked content in their cached pages.


This is not correct in my understanding: The sites with certain options enabled produced the erroneous behavior, but the data that would get leaked through this behavior could be from any site that uses Cloudflare SSL (as this requires Cloudflare to tunnel SSL traffic through their servers, decrypt it and re-encrypt it with their wildcard certificate). So if I understand correctly anyone using the (free) Cloudflare SSL service in combination with their DNS is affected.


I was wrong about the nature of the proxy issue, but right about DNS-only customers. Customers using only the free DNS service were not impacted by this at all, because traffic never flowed through the proxies.


Ah yes, sure if you only use DNS then your data never touches a CloudFlare server. Lucky you ;)


(whoops forgot to remove dupes, it's only 4,287,625) https://github.com/pirate/sites-using-cloudflare/raw/master/...


If I'm understanding correctly, that list would include not only the 3,438 domains with content that triggered the bug, but every Cloudflare customer between 2016-09-22 and 2017-02-18.


Can we trust it was only those domains?


Not really. If a site is using Cloudflare protection for only some of their subdomains they do not show on this list even if the site itself is in the alexa top 10k sites.

And of course all other sites that are not in alexa 10k are not in this list (if they are not on some other lists used, you can see the source of lists in the README of the Github repo).


No. Only Cloudflare customers using a subset of features of the SSL proxy service are impacted.

Cloudflare has a lot of customers who only use the free DNS service, for example.


Careful. It appears that any Cloudflare client who was sending HTTP/S traffic through their proxies is affected. A small subset of their customers had the specific problem that triggered the bug, but once triggered, the bug disclosed secrets from all their web customers.

You're not exposed if you never sent traffic through their proxies; for instance, if you somehow only used them for DNS.


I suspect there are a large number of Cloudflare customers that only use their DNS. I have a couple of domains in this category.

The DNS service is essentially free. It's an upgrade from most registrars' built-in DNS. It's a pretty robust solution, really -- global footprint, DNSSEC, fully working IPv6, etc.

My point is, the actual number of impacted customers was much smaller than the entire set of Cloudflare customers. There are lists in this thread that still reference hundreds of thousands (millions?) of sites, and that's just wrong.

(I agree on your first point though; I was confused about the nature of the proxy bug at first).


What I find remarkable is that the owners of those sites weren't ever aware of this issue. If customers were receiving random chunks of raw nginx memory embedded in pages on my site, I'd probably have heard about it from someone sooner, surely?

I guess there is a long tail of pages on the internet whose primary purpose is to be crawled by google and serve as search landing pages - but again, if I had a bug in the HTML in one of my SEO pages that caused googlebot to see it as full of nonsense, I'd see that in my analytics because a page full of uninitialized nginx memory is not going to be an effective pagerank booster.


Perhaps as a follow up to this bug, you can write a temporary rule to log the domain of any http responses with malformed HTML that would have triggered a memory leak. That way you can patch the bug immediately, and observe future traffic to find the domains that were most likely affected by the bug when it was running.

Or is the problem that one domain can trigger the memory leak, and another (unpredictable) domain is the "victim" that has its data dumped from memory?


I believe that's the real issue. Any data from any Cloudflare site may have been leaked. Those domains allow Google etc. to know which pages in their cache may contain leaked info; unfortunately the info itself could be from any request that travelled through Cloudflare's servers.


Yes, the victim can be a different site. Cloudflare's post mentions this: " Because Cloudflare operates a large, shared infrastructure an HTTP request to a Cloudflare web site that was vulnerable to this problem could reveal information about an unrelated other Cloudflare site. " https://blog.cloudflare.com/incident-report-on-memory-leak-c...


It shouldn't be too difficult to feed an instrumented copy of the parser some fraction of their cached pages (after all, that's what they're for... right?) and calculate the percentage that trigger e.g. valgrind, or that cause some magic string tacked onto the end of the input to appear in the output, or similar.
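
As a sketch of that magic-string idea: pad the input with a canary placed just past the bytes the parser is told about, and flag any output that contains it. (Here "rewrite_html" is a stand-in I made up for the real rewriter, not Cloudflare's actual API.)

    #include <stdlib.h>
    #include <string.h>

    #define CANARY "CF-CANARY-5c1f"

    /* Assumed stand-in for the real rewriter: consumes len bytes of HTML,
     * writes at most outcap bytes to out, returns how many it wrote. */
    size_t rewrite_html(const char *in, size_t len, char *out, size_t outcap);

    /* Returns 1 if the parser copied bytes from past the end of its input. */
    int overreads(const char *page, size_t len) {
        size_t padded = len + sizeof(CANARY);
        char *in  = malloc(padded);
        char *out = malloc(2 * padded + 1);
        if (!in || !out) { free(in); free(out); return 0; }
        memcpy(in, page, len);
        memcpy(in + len, CANARY, sizeof(CANARY));           /* canary sits just past the input */
        size_t n = rewrite_html(in, len, out, 2 * padded);  /* parser is only told about len bytes */
        out[n < 2 * padded ? n : 2 * padded] = '\0';
        int hit = strstr(out, CANARY) != NULL;              /* canary in output => overread */
        free(in);
        free(out);
        return hit;
    }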

I prefer CloudScare to Cloudbleed :)


Downpour is my preference right now. The clouds are dumping everything they've got.


How about Cloudburst?


CloudBust


If only CloudShare wasn't a thing already. :)


I'd suggest "FlareOut".


Cloudflush.


ShitFest


It is far from over, too! Google Cache still has loads of sensitive information, a link away!

Look at this, click on the downward arrow, "Cached": https://www.google.com/search?q="CF-Host-Origin-IP:"+"author...

(And then, in Google Cache, "view source", search for "authorization".)

(Various combinations of HTTP headers to search for yield more results.)


> The infosec team worked to identify URIs in search engine caches that had leaked memory and get them purged. With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains. The leaked memory has been purged with the help of the search engines.

So I tried it too, and there's still data cached there.

Am I misunderstanding something - that above statement must be wrong, surely?

They can't have found everything even in the big search engines if it's still showing up in Google's cache, let alone the infinity other caches around the place.

EDIT: If the Cloudflare team sees this: I see leaked credentials for these domains:

android-cdn-api.fitbit.com

iphone-cdn-client.fitbit.com

api-v2launch.trakt.tv


I'm also seeing a ton from cn-dc1.uber.com with oauth, cookies and even geolocation info. https://webcache.googleusercontent.com/search?q=cache:VlVylT...


That's terrifying.

Thanks to Uber now requiring location services on Always instead of just when hailing a car, my and others' personal location history even outside of Uber usage could have been compromised. Sweet.


To be fair, you were kind of a fool if you actually let Uber have your location at all times. As soon as they announced that I blocked Uber from my location. I only allow it when I take an Uber (which is almost never now).


Sometimes I'm in a rush and forget to turn it back to Never.

That doesn't make me a fool, it makes me human. Don't be a jerk. It's a dark pattern for a reason.


If you only sometimes forget, then that's not letting them have your location at all times, and you weren't called a fool.


Not a fool but ...


At least the location isn't embarrassing.[1]

[1] https://goo.gl/maps/FjQVttcZCpH2


Oh my gosh, that's the Ivey Business School, where I graduated from last year. I didn't expect this to hit so close to home...


so sorry for your loss


What did it show before it was taken down? In vague terms, of course.


Could someone enlighten me on why malloc and free don't automatically zero memory by default?

Someone pointed me to MALLOC_PERTURB_ and I've just run a few test programs with it set - including a stage1 GCC compile, which granted may not be the best test - and it really doesn't dent performance by much. (edit: noticeably, at all, in fact)

People who prefer extreme performance over prudent security should be the ones forced to mess about with extra settings, anyway.
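
For anyone who wants to reproduce that test without touching the environment, glibc exposes the same knob programmatically. A minimal sketch (glibc-specific; the exact fill bytes are from my reading of mallopt(3), so treat them as illustrative):

    /* glibc-only sketch: perturb heap memory so use of uninitialized or
     * freed allocations yields obviously-wrong data instead of stale secrets. */
    #include <malloc.h>   /* mallopt, M_PERTURB (glibc) */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Same effect as running with MALLOC_PERTURB_=42 in the environment:
         * new allocations are filled with ~42 (0xd5), freed memory with 42. */
        mallopt(M_PERTURB, 42);

        char *p = malloc(16);
        /* Indeterminate as far as the C standard goes; the perturb option
         * makes it predictable in practice. */
        printf("fresh malloc byte: 0x%02x\n", (unsigned char)p[0]);
        free(p);
        return 0;
    }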


Some old IBM environments initialized fresh allocations to 0xDEADBEEF, which had the advantage that the result you got from using such memory would (usually) be obviously incorrect. The fact that it was done decades ago is pretty good evidence that it's not about the actual initialization cost: these things cost a lot more back then.

What changed is the paged memory model: modern systems don't actually tie an address to a page of physical RAM until the first time you try to use it (or something else on that page). Initializing the memory on malloc() would "waste" memory in some cases, where the allocation spans multiple pages and you don't end up using the whole thing. Some software assumes this, and would use quite a bit of extra RAM if malloc() automatically wiped memory. It would also tend to chew through your CPU cache, which mattered less in the past because any nontrivial operation already did that.

I personally don't think this is a good enough reason, but it is a little more than just a minor performance issue.

That all being said, while it would likely have helped slightly in this case, it would not solve the problem: active allocations would still be revealed.


> Some old IBM environments initialized fresh allocations to 0xDEADBEEF, which had the advantage that the result you got from using such memory would (usually) be obviously incorrect.

On BSDs, malloc.conf can still be configured to do that: on OpenBSD, junking (fills allocations with 0xdb and deallocations with 0xdf) is enabled by default on small allocations, "J" will enable it for all allocations. On FreeBSD, "J" will initialise all allocations with 0xa5 and deallocations with 0x5a.


> What changed is the paged memory model: modern systems don't actually tie an address to a page of physical RAM until the first time you try to use it (or something else on that page). Initializing the memory on malloc() would "waste" memory in some cases, where the allocation spans multiple pages and you don't end up using the whole thing. Some software assumes this, and would use quite a bit of extra RAM if malloc() automatically wiped memory. It would also tend to chew through your CPU cache, which mattered less in the past because any nontrivial operation already did that.

Maybe an alternative approach is to simply mark the pages to be lazily zeroed out when attached, in the Page Table Entries of the MMU. They wouldn't be zeroed out at the time of the malloc() call, but only when they are attached to a physical memory location (the first time you use it).


And it seems to me the OS should ensure the pages are zero'd out rather than user space (via malloc()) doing it, because it's still a security hole to let a process read data that it's not supposed to have access to (whether it's from another process or the kernel - it doesn't matter).


The OS already zeroes out pages, obviously. But malloc doesn't usually request memory from the OS; it takes a chunk from the already-allocated heap.


Unsure, not my job. But I read stuff along those lines. A modern OS plays all sorts of games to delay doing work. Allocate a couple of megs of memory and the OS sets up some pointers in a page table. And yes it'll keep already zero'd pages handy. And mark pages as dirty to be scraped clean later.


It doesn't need to affect your CPU cache, because x64 processors have non-temporal writes (streaming stores) that bypass the cache.

The stuff about eagerly allocating pages is spot on though.

There is calloc which allocates and zeroes memory, but people don't use it as often as they should.


Parsers don't usually need to hold onto what they're parsing for a very long time, so unless they were running this in parallel on a machine with 4k cores, I'd imagine it would be much more likely that a buffer overrun hits the middle of an already-freed allocation rather than going into an active one.

In terms of "wasting" memory, perhaps the kernel could detect that you are writing 0s to a COW 0 page and still not actually tie the page to physical RAM. (If you're overwriting non-0 data, well it's already in a physical page.)

I don't quite follow the details of the CPU cache issue and why that is more-than-minor.

I do think in this day and age we should be re-visiting this question seriously in our C standard libraries. If the performance issues are actually major problems for specific systems, the old behaviour could be kept, but after benchmarking to show that it really is a performance problem.


> In terms of "wasting" memory, perhaps the kernel could detect that you are writing 0s to a COW 0 page and still not actually tie the page to physical RAM.

Writing to your COW zero page causes a page fault. Now, in theory you could disassemble the executing instruction and if it's some kind of zero write, just bump the instruction pointer and go back to userspace - but then the very next instruction in your loop that zeroes the next 8 bytes will cause the same page fault. And the next. And the next...

Taking a page fault for every 8 bytes in your allocation is completely infeasible. You'd be better off taking the hit of the additional memory usage.


How about this idea: free() zeros or unmaps all memory it allocated. This shouldn't fault. The OS zeros pages when mapping them into the process space (which it should do anyway). I think that solves the problem.


free() doesn't know what portion of the memory you allocated actually got written to. So for the model where a large, page-spanning buffer is allocated and only a small portion used, this approach causes many unnecessary page faults at free() time as it tries to zero out lots of memory that was never used or paged in at all.


Large buffers just get unmapped, so the OS can fix that problem.


An invariant you get from most kernels is that all new memory pages are zeroed when mapped into processes (normally through mmap or sbrk), so you only have the paging problem when initializing with a value other than zero.


Zeroing on malloc and/or free would not have prevented this type of error, since the information disclosure was due to an overflow into an adjacent allocated buffer.

However, zeroing on free is generally a useful defense-in-depth measure because it can minimize the risk of some types of information disclosure vulnerabilities. If you use grsecurity, this feature is provided by grsecurity's PAX_MEMORY_SANITIZE [0].

[0]: https://en.wikibooks.org/wiki/Grsecurity/Appendix/Grsecurity...
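
Outside of grsecurity you can approximate zero-on-free in user space. A minimal sketch under glibc (malloc_usable_size is glibc-specific, explicit_bzero needs glibc 2.25+ or a BSD, and the wrapper name is mine):

    #define _GNU_SOURCE
    #include <malloc.h>    /* malloc_usable_size (glibc) */
    #include <stdlib.h>
    #include <string.h>    /* explicit_bzero */

    /* Hypothetical wrapper: scrub the whole usable block before releasing it,
     * so stale secrets don't linger in the allocator's free lists. It does
     * nothing for data still sitting in live allocations, which is why it
     * wouldn't have stopped this particular bug. */
    static void free_zeroed(void *p) {
        if (p == NULL)
            return;
        explicit_bzero(p, malloc_usable_size(p));  /* not optimized away like memset */
        free(p);
    }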


Zeroing on alloc/free probably wouldn't have helped much with this bug. Data in live allocations would still be leaked.


> Could someone enlighten me on why malloc and free don't automatically zero memory by default?

The computational cost of doing so, I suspect.


Just like why most filesystems don't zero deleted files.


Neither of these are good reasons: I already talked about MALLOC_PERTURB_ (man mallopt) in my post and my naive performance tests, and we rarely get bad security holes based on data from deleted files left on filesystems.


Unfortunately, people write microbenchmarks of malloc and free a lot (and not completely without reason: they do quite often show up high in profiles).

For example, binary-trees on the Benchmarks Game is basically malloc/free bound (or at least is supposed to be as Hans Boehm originally designed it). Likewise, most JavaScript benchmarks (V8 splay, for example) are heavily influenced by raw allocation performance. Many people choose browsers and programming languages based on relatively small differences in these results. All of the incentives align in favor of performance, not security, because performance is easy to measure and security is not.


You asked for a reason, not for a good reason.

malloc/free were designed around 1972. That was a time when performance was much more important and security concerns didn't really exist.

Modern systems, like Go, do zero-out newly allocated memory because they do consider a bit more security to be more important than a bit more performance.

But changing the defaults of malloc/free is not really an option and it would probably break stuff.

Especially on Linux, where, I believe, malloc returns uncommitted pages, which increases the perf advantage in some cases.

Security conscious programmers can use calloc() or write their own wrappers over malloc/free.


they aren't good reasons now. They were good reasons ~20 years ago.

The language spec should probably now default to zeroing memory unless you specifically ask it not to... and maybe that should be a verbose option :)


Are these results hardware independent? Maybe it makes a difference on older machines, or different architectures.


I imagine clearing memory on free is more relevant than MALLOC_PERTURB_?


calloc zeroes memory on allocation.


Yes, I think the question was something like "why doesn't malloc call calloc?".


Always nice to have options. Not zeroing memory on allocation might save a few cpu cycles.


It's pretty much the definition of false economy. Would you rather save a few cycles or suffer debilitating security bugs at random intervals? Always use calloc unless a) there's a proven performance problem and b) you know for a fact that due to careful inspection/static analysis/black magic malloc is safe. Then use calloc anyway because why risk it?


It depends on the size of the chunk of allocated memory. If it is quite large, time spent zeroing it can be substantial. Then again, if you're allocating in performance critical path, you're doing it wrong anyways.


It takes time to do that.


> that above statement must be wrong, surely?

Either they believe it's right, which means they're not competent enough to really assess the scope of the leak; or they don't believe it, but they went "fuck it, that's the best we can do".

In either case, it doesn't really inspire trust in their service.


you missed one possibility: that they're deliberately attempting to downplay the severity to make themselves look less incompetent


jgrahamc: can you list which public caches you worked with to attempt to address this? It does not inspire confidence when even google is still showing obvious results


Google, Microsoft Bing, Yahoo, DDG, Baidu, Yandex, and more. The caches other than Google were quick to clear and we've not been able to find active data on them any longer. We have a team that is continuing to search these and other potential caches online and our support team has been briefed to forward any reports immediately to this team.

I agree it's troubling that Google is taking so long. We were working with them to coordinate disclosure after their caches were cleared. While I am thankful to the Project Zero team for informing us of the issue quickly, I'm troubled that they went ahead with disclosure before Google's crawl team could complete the refresh of their own cache. We have continued to escalate this within Google to get the crawl team to prioritize the clearing of their caches, as that is the highest-priority remaining remediation step.



Thousands of years from now, when biological life on this planet is all but extinct and superintelligent AI evolving at incomprehensible rates roam the planet, new pieces of the great PII pollution incident that CloudFlare vomited across the internet are still going to be discovered on a daily basis.


I was expecting this:

Thousands of years from now, when biological life on this planet is all but extinct and superintelligent AI evolving at incomprehensible rates roam the planet, taviso will still be finding 0-days impacting billions of machines on an hourly basis.

Be glad that Google is employing him and not some random intelligence agency.


I have huge respect for taviso and his team. Their track record in security work is so impressive. They are without a doubt extremely capable.

However, I am always wondering: are they really globally unique in their work and skill? So that they are really the ones finding all the security holes before anyone else does because they are just so much better (and/or with better infrastructure) than anyone else? Or is it more likely that on a global scale there are other teams who at least come close regarding skill and resources, but who are employed by actors less willing to share what they found?

I really do hope Tavis is a once-in-a-lifetime genius when it comes to vulnerability research!


One of the big controversies in the infosec world is people who sell 0-day exploits to "security companies." Some go for tens of thousands of dollars. Ranty Ben talked about how some people live off this type of income when it came up in a panel discussion at Ruxcon 2012.


No, he is definitely not alone. Some of them work for other security companies or for antivirus companies; some of them sell the vulnerabilities they find.


What's funny is he kinda just stumbled upon this bug accidentally while making queries.

If I were just casually googling two weeks ago and came across a leaked cloudflare session in the middle of my search results I think I would have vomited all over my desk immediately. Dude must have been sweating bullets and trembling as he reached out on twitter for a contact, not knowing yet how bad this was or for just how long it's been going on.




I believe the 2009 Yahoo-Bing agreement is still in force, where Bing provides search results on Yahoo.com:

http://news.bbc.co.uk/2/hi/business/8174763.stm

I know the search I performed now on Yahoo states "Powered by Bing™" at the bottom.


Yeah, I thought that could be it as well, but this was at the bottom of the Yahoo result:

<!-- fe072.syc.search.gq1.yahoo.com Sat Feb 25 03:58:27 UTC 2017 -->

Given they are identical results, it's pretty clear it must be a shared index, I suppose; that, or the leaked memory was cached.


Yahoo provides a front end to the search results, Bing provides the crawl/search/archives.


What the hell does Yahoo even do anymore? Just email? Or is that just a proxy to hotmail?


Finance, News, Mail, Fantasy Sports, etc., to name a few categories where they are still in the top three.

Yahoo was never really a search company (even at its founding, it was a "directory", not a "search engine"). Sure, they pretended fairly well from 2004ish (following their move off Google results) to 2009 (when they did the Bing deal), but the company never really nailed search or, more importantly, search monetization, despite acquiring one of the first great search engines (Altavista) and the actual inventor of the tech Google stole for its cash cow AdWords (Overture).


Isn't Yahoo search just a frontend to bing nowadays?


Some IPv6 internal connections, some websocket connections to gateway.discord.gg, rewrite rules for fruityfifty.com's AMP pages, and some internal domain `prox96.39.187.9cf-connecting-ip.com`.

And some sketchy internal variables: `log_only_china`, `http_not_in_china`, `baidu_dns_test`, and `better_tor`.


Exactly, it looks like the people doing the cleaning have so far only looked for the most obvious matches (just searching for the Cloudflare-unique strings). There's surely more where "only" the user data was leaked, and that is still in the caches.


The event where one line of buggy code ('==' instead of '<=') creates global consequences, affecting millions, is a great illustration of the perils of monoculture.

And monoculture is the elephant in the room most pretend not to see. The current engineering ideology (it is ideology, not technology) of sycophancy towards big and rich companies, and popular software stacks, is sickening.
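
A minimal sketch of the '==' class of bug mentioned above, not Cloudflare's actual (Ragel-generated) code: an equality test against the end pointer is simply stepped over if the cursor can ever advance past it, whereas a '<' comparison cannot be skipped.

    #include <stddef.h>

    /* Walk a buffer, consuming 1 or 2 bytes per step depending on the input. */
    static void scan(const char *buf, size_t len) {
        const char *p  = buf;
        const char *pe = buf + len;               /* one past the last valid byte */

        while (p != pe) {                         /* BUG: equality end-check */
            size_t step = (*p == '&') ? 2 : 1;    /* pretend '&' starts a 2-byte token */
            p += step;                            /* a 2-byte step from pe-1 lands at pe+1, */
        }                                         /* p never equals pe, and the loop walks  */
    }                                             /* off into whatever memory follows       */

    int main(void) {
        scan("abc&", 4);   /* trailing '&' steps over pe: out-of-bounds reads (UB, likely crash) */
        return 0;
        /* With "while (p < pe)" the loop exits as soon as the cursor passes the end. */
    }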


How about clearing the entire cache? (Or at least everything created in the last few months.)

I've never seen anyone suggest it; I suppose it cannot or should not be done for some reason?


You are asking them to delete petabytes of data. Some parties are interested in owning such data.


The real problem is going to be where history matters and you can't delete - for example archive.org and httparchive.org. There is no way to reproduce the content in the archive obviously, so no one will be deleting it. The only way is to start a massive (and I mean MASSIVE) sanitization project...


Or clearing all the cache of Cloudflare's websites. I think that's doable.


At this moment the problem is not on Cloudflare's side: search engines crawled tons of data with leaked information. Even if Cloudflare drops its caches, the data is already on 3rd-party servers (search engines, crawlers, agencies).


That's why he asked that the caches of all Cloudflare sites be dropped, not by Cloudflare but by these 3rd parties.


That might work. If said 3rd parties were interested in helping. Most of them might be but it just takes one party refusing to help and then you've still got the data out there.


No, I meant: get a list of all domains using Cloudflare and get those removed from the crawlers' caches.


Offtopic: "with all due respect" is often followed by words void of respect.


He is British. "With all due respect" means no respect is due. I don't think it's possible to show less respect while appearing polite. In other words, them's fighting words.

http://todayilearned.co.uk/2012/12/04/what-the-british-say-v...


This is perfectly fine if the amount of respect due is sufficiently low.


Given the answers that Cloudflare is giving, I'd say it's quickly approaching zero.


Ha! Excellent point!


Incredible. Are they really trying to pin it on Google? Yes, clearing cache would probably remove some part of the information from public sources. But you can never clear all cache world-wide. Nor can you rely that the part that was removed was really removed before being copied elsewhere.

The way I see it, time given by GZero was sufficient to close the loophole, it was not meant to give them chance to clear caches world-wide. They have a PR disaster on their hands, but blaming Google won't help with it.


You really have to see this to really grasp the severity of the bug.


The scope of this is unreal on so many levels.

20 hours since this post and these entries are still up ...


Can anyone provide some context, please?


For anyone being linked directly to the post: the link back to the parent page is right on top: https://news.ycombinator.com/item?id=13718752

You can also click on "parent", and repeat as necessary.


The bottom of the file has contents from another connection. Notably

    HTTP/1.1
    Host gateway.discord.gg



After 16 hours, those cached pages are still up...


While it is good that you discovered leaked content is still out in the wild, your tone is somewhat condescending and rude. No need for it.


You might not know the history here. Tavis works at Google and discovered the bug. He was extremely helpful and has gone out of his way to help Cloudflare do disaster mitigation, working long hours throughout last weekend and this week.

He discovered one of the worst private information leaks in the history of the internet, and for that, he won the highest reward in their bug bounty: a Cloudflare t-shirt.

They also tried to delay disclosure and wouldn't send him drafts of their disclosure blog post, which, when finally published, significantly downplayed the impact of the leak.

Now, here's the CEO of Cloudflare making it sound like Google was somehow being uncooperative, and also claiming that there's no more leaked private information in the Bing caches.

Wrong and wrong. I'd be annoyed, too.

--

Read the full timeline here: https://bugs.chromium.org/p/project-zero/issues/detail?id=11...


I think this is a one-sided view of what really happened.

I can see a whole team at Cloudflare panicking, trying to solve the issue, trying to communicate with big crawlers trying to evict all of the bad cache they have while trying to craft a blogpost that would save them from a PR catastrophe.

All the while Taviso is just becoming more and more aggressive to get the story out there. 6 freaking days.

Short timelines for disclosure are not fun.


There was no panic. I was woken at 0126 UTC the day Tavis got in contact. The immediate priority was to shut off the leak, but the larger impact was obvious.

Two questions came to mind: "how do we clean up search engine caches?" (Tavis helped with Google), and "has anyone actively exploited this in the past?"

Internally, I prioritized cleanup because we knew that this would become public at some point and I felt we had a duty of care to clean up the mess to protect people.


> "has anyone actively exploited this in the past?"

Has this question been answered yet?


We're continuing to look for any evidence of exploitation. So far I've seen nothing to indicate exploitation.


>> "has anyone actively exploited this in the past?"

Wouldn't your team now have to decide how to deal with this even after some specific well-known caches have been cleared? I mean, there's no guarantee that someone hasn't already collected all this data and won't use it to target those Cloudflare customer sites. Are you planning to ask all your customers to reset all their access credentials and other secrets?


Google Project Zero has two standard disclosure deadlines: 90 days for normal 0days, and 7 days for vulnerabilities that are actively being exploited or otherwise already victimizing people.

There are very good reasons to enforce clear rules like this.

Cloudbleed obviously falls into the second category.

Legally, there's nothing stopping researchers from simply publishing a vulnerability as soon as they find it. The fact that they give the vendor a heads-up at all is a courtesy to the vendor and to their clients.


> The fact that they give the vendor a heads-up at all is a courtesy to the vendor and to their clients.

It is the norm, and it is called responsible disclosure. You're trying to do the least harm, and the least harm is a balance between giving the developers some time to develop a fix and getting the news out there for customers, and customers of customers, to be aware of the issue.


With all due respect, they should suffer a pr catastrophe.


In this case I feel your comment is misdirected. Cloudflare was condescending in their own post above, the one he was replying to - "I agree it's troubling that Google is taking so long" is a slap in the face to a team that has had to spend a week cleaning up a mess they didn't make. It is absolutely ridiculous that they are shitting on the team that discovered this bug in the first place, and to top it all off they're shitting all over the community as a whole while they downplay and walk the line between blatantly lying and just plain old misleading people.


I would be pretty mad if a website that I was supposed to trust with my data made an untrue statement about how something was taken care of, when it was not, and then published details of the bug while the cached data is still out in the wild, now exploitable by any hacker who was living under a rock during the past few months.


Actually I proxy two of my profitable startup frontend sites with CloudFlare, so I am affected (not really), but giving them the benefit of the doubt as they run a great service and these things happen.


They are well past deserving the benefit of the doubt.

I would also advise you to notify your cloud-based services' customers of how they might be affected (yes, really); trust erosion tends to be contagious.


Agreed. The condescending downplaying tones displayed just aren't acceptable.


We only host our static corporate sites (not apps) and furthermore never used CF email obfuscation, server-side excludes or automatic https rewrites, thus we are not vulnerable.


Hi,

I think you have misunderstood the issue. Just because YOU did not use those services does not mean your data was not leaked. It means that other people's data was not leaked on YOUR site, but YOUR data could have been leaked on other sites that were using these services.


> We only host our static corporate sites (not apps)

If this part is true, they're not vulnerable. Only data that was sent to CloudFlare's nginx proxy could have leaked, so if they only proxy their static content, then that's the only content that would leak.

The rest of their comment gives the wrong impression though, yeah.


> Only data that was sent to CloudFlare's nginx proxy could have leaked, so if they only proxy their static content, then that's the only content that would leak.

The way it worked, the bug also leaked data sent by the visitors of the these "static sites": IP addresses, cookies, visited pages etc.


Thanks for clarifying. You are absolutely right.


So far as I know, nothing like this has ever happened at any CDN before.


There have definitely been incidents where CDNs mixed up content (of the same type) between customers. Not exactly like this, but close.


I find it troubling that the CEO of Cloudflare would attempt to deflect their culpability for a bug this serious onto Google for not cleaning up Cloudflare's mess fast enough.

I don't use CF, and after seeing behavior like this, I don't think I will.


On a personal note, I agree with you.

Before Let's Encrypt was available for public use (beta), CF provided "MITM" https for everyone: just use CF and they would issue you a certificate and serve https for you. So I tried that with my personal website.

But then I found out that they replace a lot of my HTML, resulting mixed content on the https version they served. This is the support ticket I filed with them:

  On wang.yuxuan.org, the css file is served as:

  <link rel="stylesheet" title="Default" href="inc/style.css" type="text/css" />

  Via cloudflare, it becomes:

  <link rel="stylesheet" title="Default" href="http://wang.yuxuan.org/inc/A.style.css.pagespeed.cf.5Dzr782jVo.css" type="text/css"/>

  This won't work with your free https, as it's mixed content.

  Please change it from http:// to //. Thanks.

  There should be more similar cases.
But CF just refused to fix that. Their official answer was that I should hardcode https. That's bad because I only have https with them; it would break as soon as I left them (I guess that makes sense to them).

Luckily I have Let's Encrypt now and no longer need them.


Well, the CEO does have beef with Google: https://blog.cloudflare.com/post-mortem-todays-attack-appare...

This led to Cloudflare refusing to implement support for Google Authenticator for 4 years.


lol, really? Google authenticator is just TOTP - it's an open standard. That seems childish.

Also, the notion that the CEO of an internet company would have a "beef with Google" is pretty funny.


This comment greatly lowers my respect for Cloudflare.

Bugs happen to us all; how you deal with this is what counts, and wilful, blatant lying in a transparent attempt to deflect blame from where it belongs (Cloudflare) onto the team that saved your bacon?

I've recommended Cloudflare in the past, and I was planning, with some reservations, to continue to do so even after disclosure of this issue. But seeing this comment? I don't see how I can continue.

(For the sake of maximum clarity: I take issue: 1) with the attempt at suggesting the main issue is the clearing of caches, not the leak itself. It doesn't matter how fast you close the barn door after the horse is gone and the barn has burned down. 2) With the blatantly false claim that non-Google caches have been cleared, or were faster to clear than Google's. Cloudflare should know, better than anyone, the massive scope of this leak, and the fact that NO search engine's cache has or could be cleared of this leak. If you find yourself in a situation so bad you feel like you need to misdirect attention to someone else, and it turns out no one else is actually doing anything so you have to lie about that... maybe you should just shut up and stop digging?)


Hey! Don't keep the horse locked in if the barn is burning!


> I agree it's troubling that Google is taking so long.

Google has absolutely no obligation to clean up after your mess.

You should be grateful for any help they and other search engines give you.


You're right, I guess. (Disclaimer: Not affiliated with any company affected / involved)

But I still find it troubling. Is it their mess? No. Does it affect a lot of people negatively - yes. I expect Google to clean this up because they're decent human beings. It's troubling because it's not just Cloudflare's mess at this point.

It reminds me of the humorous response to "Am I my brother's keeper?", which is "You're your brother's brother"


Google cleaning this up is going to take a ton of man-hours, which will cost a LOT of money. How much money is Google obligated to spend to help a competitor who fucked up? Are they supposed to just drop everything else and make this the top priority?


I don't see this as them as helping a competitor. The damage has been done (in terms of customer relations).

I view leaving up the cached copy of leaked data as being a jerk move - not towards Cloudflare, but to anyone whose data was leaked.

This is an opportunity for Google to show what they do with rather sensitive data leaks - do they leave them up or scrub them?

Had damage from the leak already been done (to those whose data it was)? Probably. Even taking that into account, I think Google search comes off as a jerk in this situation.


I feel like you are operating under the assumption that deleting this leaked data is trivial, that they just have to hit a delete button and the data is gone.

This is not the case; it is not obvious, trivial, or easy to delete the leaked data. It is not simple to find it all. This is not like they are being given a URL and being asked to clear the cached version of it; they are being asked to search through millions of pages for possibly leaked content.


I despise the way you've dealt with this issue with as much dishonesty as you thought you could get away with.

I will be migrating away from your service first thing Monday. I will not use your services again and will ensure that my clients and colleagues are informed of your horrific business practices now and in the future.


Next time, beware of parsers. Or formally verify them :)

https://arxiv.org/pdf/1105.2576.pdf

(disclaimer: co-author)


For those who haven't been following along, this is the CEO of CloudFlare lying in a way that misrepresents a major problem CloudFlare created. Additionally, they are trying to blame parts of this problem on those who told them about the problem they created.


At least tell me they got their t-shirts lol.


>I'm troubled that they went ahead with disclosure before Google crawl team could complete the refresh of their own cache.

It sounded like they (cf) were under a lot of pressure to disclose ASAP from project zero and their 7 day requirement...


eastdakota is one of the cloudflare guys, so "they" in that sentence can only refer to Google (see also the previous paragraph/sentences, where eastdakota used "we" for cloudflare).


He's the CEO


With something this drastic, 7 days was generous.


>> We have continued to escalate this within Google to get the crawl team to prioritize the clearing of their caches as that is the highest priority remaining remediation step.

If you're using the same attitude with their team as you use in this comment, I'm pretty sure they will be thrilled to set aside all their regular work and help you clean up an enormous mess created by a bug in your service.


Oh wow, taking a shit on Google after they helped you by reporting a critical flaw in your infrastructure.

I'm no longer using CF for my own projects, but you've just cemented my decision that none of my clients will either.


https://webcache.googleusercontent.com/search?q=cache:lw4K9G...

    Internal Upstream Server Certificate
    ...
    /C=US/ST=California/L=San Francisco/O=Cloudflare Inc./OU=Cloudflare Services - nginx-cache/CN=Internal Upstream Server Certificate
That really doesn't look good.


Just to point out, this is apparently a cert used for communication between Cloudflare's services, which has (presumably) since been replaced. Cloudflare customers' certs weren't exposed.


Correct. That's that cert.


Just to be clear: is this a cert used for authenticating with Cloudflare's systems or just for encryption? If used for authentication, you need to ensure it hasn't been stolen and used before this was found by P0.


Lol, Google just purged that search.

EDIT: but there's still plenty of fish: http://webcache.googleusercontent.com/search?q=cache:lw4K9G2...

This will take weeks to clean, and that's just for Google.

EDIT2: found other oauth tokens, lots of fitbit calls... And this just by searching for typical CF internal headers on Google and Bing. There is no way to know what else is out there. What a mess.


Ouch, you really see everything :

> authorization: OAuth oauth_consumer_key ...

what a shit show. I'm sorry, but at this point there must be consequences for incompetence. Some might argue "but nobody could do anything about it"...

I'm sorry, CF has the money to ditch C entirely and rewrite everything from the ground up in a safer language; I don't care which it is, Go, Rust, whatever.

At this point, people using C directly are playing with fire. C isn't a language for highly distributed applications; it will only distribute memory leaks... With all the wealth there is in Silicon Valley, trillions of dollars, there is absolutely zero effort to come up with an acceptable solution? All these startups can't come together and say, "OK, we're going to design or choose a genuinely safe language and stick to it"? Where does all that money go, then? Because this bug is going to cost A LOT OF MONEY to A LOT OF PEOPLE.


These guys were probably saved by using OAuth - there is a consumer secret (which the "_key" is just an identifier for) and an access token secret, both of which are not sent over the wire. Just a signature based on them. (The timestamp and nonce prevent replay attacks.)

OAuth2 "simplified" things and just sends the secret over the wire, trusting SSL to keep things safe.


Does this have anything to do with CloudFlare's ambitious attempt to be the first service to proxy your https traffic to your users?

Perhaps the largest MITM ever eh?


This actually happened because they started to rewrite it all, according to their blog post.


Started to re-write it...in C


Good. They're trying to clean up all the private data leaked everywhere. I'm tempted to say "why couldn't they figure out this Google dork themselves", but they've probably been slammed for the past 7 days cleaning up a bunch of stuff anyway.


You have no idea.


The effort you're putting into cleaning up someone else's mess cannot be overstated, nor can it be sufficiently appreciated. Thanks!


Any chance you can describe why these cached pages missed the purge that Cloudflare initiated? Seems like Cloudflare should have brought in an outside expert to try to exploit this issue before the disclosure was made.


For vulnerabilities with immediate exploit exposure, where people are currently being victimized by the flaw, Project Zero has a 7-day embargo.

The short waiting period balances the vendor's interest in coordinating the smoothest fix to the problem with the public's interest in knowing its exposure and maximizing its options for reacting to it.

The fixed waiting period keeps the process sane. Every vendor you'll ever disclose a serious vulnerability to will try to delay disclosure, usually repeatedly. If you set a precedent of making arbitrary exceptions, you'll never be able to stare anyone down.

Again: as the reporters, you're trying to balance the vendor's interests with those of the public. Your credibility in these situations is pretty important, not just for this vulnerability, but for the next ones. With P0, we all know there will be a long series of "next ones" to be concerned about.


I definitely understand the embargo, but this is one of those situations where the vuln was already fixed and it's likely very few malicious actors (possibly 0, but of course who knows) were aware of its existence.

I feel like adding even just another day or two would've allowed them to purge more of these search results. I think that would greatly outweigh the increased risk of letting it remain undisclosed for slightly longer.


Thank you for your thoughtful reply; I realize the difficult situation you are in.


Hah, no, my situation is super easy; it is "partisan bystander." I don't work for Google.


FYI, I'm seeing some more of these results show up (with active caches) for the following searches:

"CF-RAY" "CF-Force-Miss-TS"

"X-SSL-Server-Name"

"Internal Upstream Server Certificate0"


CF-RAY isn't internal and will show up in any CloudFlare hosted site's response headers.
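Right - and for anyone wondering whether a particular site (ideally your own) is even fronted by Cloudflare, those public headers are enough to check. A rough sketch, with example.com as a placeholder and no claim to being authoritative:

    import urllib.request

    def fronted_by_cloudflare(url="https://example.com/"):
        # Heuristic only: look for Cloudflare's public response headers.
        with urllib.request.urlopen(url, timeout=10) as resp:
            headers = {k.lower(): v for k, v in resp.getheaders()}
        return "cf-ray" in headers or headers.get("server", "").lower() == "cloudflare"

    if __name__ == "__main__":
        print(fronted_by_cloudflare())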


I'm aware of this, but combined with "CF-Force-Miss-TS" that search was turning up a number of clear examples of cached Cloudflare memory data.


Your hard work is appreciated.


Not sure if you'll see this, but I've noticed that the cache links have been removed on literally all hits for these queries.

And yet, I occasionally see working cache links on relevant unaffected pages.

Really, really awesome to see this kind of response. It's an obvious course of action (also considering the corporate liability of publicly holding/offering this data), but it's really cool to see everyone work to fix this en masse so quickly.

I think a lot of people would enjoy hearing campfire battle stories of the past ~week once this is all over.


Thank you for all your hard work.


> This will take weeks to clean, and that's just for Google.

Couldn't Google just purge all cached documents which match any Cloudflare header? This will probably purge a lot of false positives, but it's just cached data, so would that loss really matter? My guess is that this approach should not take more than a few hours on Google's infrastructure.

Of course, this leaves the problem of all the other non-Google caches out there.


OAuth1 doesn't send the secrets with the requests, just a key to identify the secret and a signature made with the secret.

OAuth2 does send the secret, typically in an "Authorization: Bearer ..." header.

The uber stuff that somebody else linked to looks like a home-grown auth scheme and it appears that "x-uber-token" is a secret, but hard to know for sure.


So while people are having fun here with search queries, how many scripts are already up and running in the wild, scraping every caching service they can think of in creative ways for useful data...

This is an ongoing disaster, wasn't this disclosed too soon?


The "well-known chat service" mentioned by Tavis appears to be Discord, for the record.

edit: Uber also seems to be affected.


>It is a snapshot of the page as it appeared on Feb 21, 2017 20:20:45 GMT

So the issue wasn't fully fixed on Feb 19, or Google's cache date isn't accurate?


It seems like the reasonable thing for Google to do is to clear their entire cache. The whole thing. This is the one thing that they could do to be certain that they aren't caching any of this.


What about Bing, Baidu, Yandex, The Internet Archive, and Common Crawl? What about caches that are surely maintained by the NSA, ФСБ, and 3PLA?


Of course. Google dumping their cache puts only a small dent into the problem, but I feel that it's their responsibility to the innocent site operators caught in the middle of this.


Cloudflare's incompetence isn't Google's responsibility, particularly when Google wiping out their caches and damaging their own search results doesn't fix the problem. Hackers know how to use more than one search engine.


That only gives them an excuse to do nothing about this. All those companies should immediately go ahead and update any data that could have possibly leaked + inform their customers.


CF should be thankful Google is doing any of this; clearing their entire cache would cost Google money to re-index the web from scratch.


That might be a bit too extreme. But they should do something quickly to try to find all of these.


I would say Cloudflare should hire people to try to find them. It's really not on Google, IMO (unless caching has some implications regarding storing sensitive data).



Wow, I just tried this, the first result with a google cache copy has a bunch of the kind of data described. Although there was only one result with a cache.


The second page had a result with an OAuth2 Bearer token in it.


PII, OAuth data, etc.


I've so far seen an OAuth key for Fitbit (via their Android app) and API keys for Trakt (though apparently that service doesn't use them?)

I don't know, this just seems catastrophic.


I searched for

"CF-Host-Origin-IP:" token

.... uhm is that what I think I'm seeing???


The first couple I looked at were requests to Uber and Fitbit...


One of my Uber rides two weeks ago went completely nuts. Both my app and my driver's app screwed up at the same time; I was never picked up, and then seconds later the app claimed I had reached my destination.

You have to wonder whether something like this is implicated.


That's one phenomenal leap of logic there. Why would you think that?


Merely that both my app and the driver's app screwed up at the same time, and had a good chance of hitting the same Uber endpoint.

Apps that consume APIs would be more sensitive to unexpected junk than browsers.


But there are so many other much more likely reasons why something like that would have happened, it is quite a leap to think that it is somehow related to this issue.


Without disagreeing, can you give me an example?

And it's just speculation. Shrug.


One simple explanation could be the road was between very large concrete buildings or the area has some sort of GPS interference (there is one place in Tokyo that jumps my GPS and probably others' by about 300m to the same location every time). Another simple explanation is the software has a bug on when it thinks you arrive in some extremely bizarre scenario (hence you both had it happen simultaneously).

I don't know how it works in the back so this is all speculation of course.


Yep, but I'd already taken an Uber ride from the exact same place the day before. And everything went smoothly.


Probably not.

If someone knew about this exploit they're not going to be messing with people's Uber rides for lulz.


I wasn't implying intention.


This is quite bad. I hope Google can put some effort into clearing its cache too.


Time to find out where various "booter" sites are actually hiding.


If anyone here is HIPAA-regulated or you have a customer who is, and you used Cloudflare during those dates, it is Big Red Button time. You've almost certainly got a reportable breach; depending on how tightly you're able to scope it maybe it won't be company-ending.


> If anyone here is HIPAA-regulated or you have a customer who is

Cloudflare certainly does; I founded a health tech company, and Cloudflare was the recommended go-to for health tech startups who needed a CDN while serving PHI.

And this is definitely a reportable breach. Technically any breach is supposed to be reported to HHS, but in reality, a lot of covered entities (e.g. insurers) fail to report smaller breaches (which, as a patient, should terrify you). The big ones, though, are really, really bad, and when reported, the consequences can be very serious and potentially even include serving time, depending on the circumstances.

The reason I can be so confident that this is a reportable breach is that the definition of PHI is so broad that even revealing the existence of information between two known entities can be considered protected information. Anything more specific, like a phone number or DOB, or time of an appointment (even if you don't know who the appointment corresponds to) - that's always protected. And Cloudflare certainly has many of those.


Well, HIPAA wouldn't allow your HTTPS traffic to flow unencrypted through a shared proxy, right? That means Cloudflare couldn't offer that feature, so they probably didn't?

Just think about the HIPAA document describing a single endpoint handling dozens of sensitive data streams, decrypting and then re-encrypting them all on the same machine - a machine that does some random HTML parsing for snippet caching on the side.

I don't see that passing review, but perhaps I'm naive...


From their blog post: https://blog.cloudflare.com/incident-report-on-memory-leak-c...

"Because Cloudflare operates a large, shared infrastructure an HTTP request to a Cloudflare web site that was vulnerable to this problem could reveal information about an unrelated other Cloudflare site."

You don't need to be using this feature, or to be sending malformed HTML yourself - just to be in memory for this Cloudflare process.


Apparently I was incorrect, and HIPAA does not require protected data streams to be isolated from each other. Perhaps I was confusing some other (European) regulation. For HIPAA it seems to be sufficient to promise that everything is secure, that you have documented everything and that you know what to do when stuff goes wrong.

So we should see very quickly that Cloudflare knows what to do when stuff goes wrong.


Why isn't the Cloudflare traffic encrypted with HTTPS??


It probably was, but the data still exists in unencrypted form in the server's memory before it's encrypted and sent out over HTTPS. You have to have something to encrypt before you can encrypt it.

The memory leaked by this bug includes that pre-encryption data, which is what we're seeing here.

(At least that's my interpretation, computer security isn't quite my wheelhouse)
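That's the right intuition. For what it's worth, here's a toy model of the bug class - emphatically not Cloudflare's actual Ragel-generated C, just an illustration of how a parser that only stops on a closing delimiter, and forgets to also check the end of the buffer, will copy whatever unrelated data happens to sit next to the page in memory:

    # Toy model of the bug class (NOT Cloudflare's actual code).
    page   = b'<html><img width="500'                     # attribute never closed
    nearby = b'POST /login user=alice&password=hunter2"'  # unrelated request sharing the "heap"
    memory = page + nearby                                # one contiguous buffer, like process memory

    def copy_attr_value(buf, start, end):
        # Copies an attribute value up to the closing quote.
        # BUG: the loop never checks `end`, so a truncated attribute
        # keeps reading whatever lies beyond the page.
        out, p = bytearray(), start
        while buf[p] != ord('"'):      # should also stop once p reaches `end`
            out.append(buf[p])
            p += 1
        return bytes(out)

    start = memory.index(b'width="') + len(b'width="')
    print(copy_attr_value(memory, start, end=len(page)))
    # -> b'500POST /login user=alice&password=hunter2'  (leaked neighbouring bytes)

In the real bug, the "nearby" bytes were live proxy memory - cookies, POST bodies, auth headers from other customers' requests - which is why the cached fragments look the way they do.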


Does Cloudflare sign BAAs?


I've also been looking into the same question, and I don't see any external indication that they consider themselves a Business Associate as far as their policies go. I would argue, however, that CloudFlare is a BA by definition if an application is using any of the WAF or SSL proxy functionality.

We've been reaching out to a couple of vendors that do use the proxy functionality (given that the data spill could impact our clients as well). Hoping to resolve the BAA uncertainty in the process too.


Isn't it worse than that? Even if you are not a CF user, if your apps make calls to a third party site protected by CF, you could be at risk (stolen credentials, API keys), and could be attacked using those now.


That's also a bad thing, but you can roll creds and check whether anyone has exfiltrated data from your various accounts. You can't roll patient identities. There doesn't appear to be any way to figure out which of your HTTPS pages served in the last 6 months are presently publicly exposed.

I feel for folks who lost API keys -- really -- but everyone regulated should be in full-on disaster recovery mode right now.


If you are/were using Cloudflare to cache PHI though their CDN without a BAA, you were likely in breach before this.

Some have suggested that Cloudflare might not be a business associate because of an exception to the definition of business associate known as the "conduit" exception.

Cloudflare is almost certainly not a conduit. HHS's recent guidance on cloud computing takes a very narrow view[0]:

"The conduit exception applies where the only services provided to a covered entity or business associate customer are for transmission of ePHI that do not involve any storage of the information other than on a temporary basis incident to the transmission service."

OCR hasn't clarified what "temporary" means or whether a CDN would qualify, but again, almost certainly not. ISPs qualify, but your data just sits on the CDN indefinitely.

p.s. Hi Patrick and Aditya!

[0] https://www.hhs.gov/hipaa/for-professionals/special-topics/c...


Agree completely with you on this, and based on my experience with OCR, I'd say they would as well. The analogy for a "mere conduit" is the postal service. And that analogy falls apart as soon as you realize that CloudFlare, when being used as an SSL termination point, is opening and repackaging each "letter" on the way to the destination.

I do hate for CloudFlare to be the example for companies playing fast and loose with the rules, but I am hoping we'll have an opportunity in this to clarify the conduit definition a bit more.

Would like to mention that I don't think this declaration applies to every scenario. CloudFlare isn't just one service. I don't see an immediate issue using CloudFlare for DNS on a healthcare app. Neither do I see an issue using CloudFlare as the CDN for static assets. Both of these cases should be evaluated in a risk analysis, but they don't necessitate the level of shared responsibility a BAA entails.


I remember Tavis tweeted Friday night asking for a cloudflare engineer to contact him, and everyone joked that the last thing you want on a Friday evening is an urgent message from tavis ormandy.


That was my tweet believe it or not. I had to turn notifications off on my phone because out of nowhere it was getting bombarded with shares/likes...


I would say the crazy thing is a mere t-shirt as their "bug bounty" top tier award given how they've pitched themselves as an extremely secure service.

https://hackerone.com/cloudflare

I'm sorry but when the reward for breaking into you is basically a massive pinata of personal information...that simply is a bad joke. Security flaws are going to happen and if you aren't going to even offer a reasonable financial reward to report them to you, well, that is just begging to be exploited with a pinata that size.


Nah. Bug bounties don't work for services like CDNs. Maybe they do elsewhere. But for enterprise services, the noise rate is too high, and the very good bug finders are either salaried, free, or working for the adversary.


I think I'd need to see some sort of evidence of this assertion. Bug bounties are commonly offered across a huge variety of online services, and they get results...not always, not necessarily consistently high quality, but even the giants (facebook comes to mind) have had reasonably serious bugs found by people seeking bounties.


He's not wrong about the noise level. I conducted a survey of the most notable bug bounties in 2014 and found that the largest companies either have ineffective programs or quickly scale teams to handle inbound reports full-time. There are security engineers at Google and Facebook who spend a majority of their time responding to, and triaging bug bounty submissions.

That said, I disagree that bug bounties don't work for CDNs. You can scale a bug bounty up, it just requires resources. Cloudflare has those resources, and part of it is a function of the reward tiers you offer.


Bounty researchers aren't the only quasi-rational economic actors in this system. Cloudflare, we might surmise, get enough benefit from their bounty program that they're willing to pay for its administration costs and the occasional T-shirt, but they don't see value in spending more than that.

More than that, access to the service is actually the limiting factor for good bug bounty results. Cloudflare's bug bounty, we might surmise, works as well as it does because anyone can sign up for a Cloudflare account for free. For an enterprise CDN, who won't talk to a potential customer without the prospect of an $x0,000+/year contract, everyone who has enough access to the service to, in the general course of business, find and submit meaningful reports is employed by a customer, and likely prohibited from accepting substantial rewards. Everyone else either doesn't have enough access to submit meaningful reports, or the bug is so bad (like this one) that they'll report it regardless.

Arguably this shows that Cloudflare and other CDNs are right in their calculations: Tavis disclosed this bug to Cloudflare without promise of a payout, or even a T-shirt. Might some good Samaritan on the Internet have noticed the bug and reported it earlier if the bounty was more substantial? Perhaps. But in responding to a vulnerability of this magnitude, you want to work with someone of Tavis's caliber, who has the good of all the stakeholders in mind, not a profit-motivated rando.


I'll gladly offer some anecdotal evidence:

We've got about 2500 tickets in our ticketing queue that have been filed over the past 8 months (excluding spam). Out of those 2500 tickets, only five are valid issues, and only one came with an actual write up.

The signal to noise ratio is absolutely awful - and it's not uncommon for people with invalid issues to demand that you pay them regardless.


Wow, that's much worse than I would have guessed. I would have assumed 10:1, tops. We get security reports, and sometimes they ask for a bounty, and only a very small number are bogus (but we don't have a formal bounty program). Less than half of our security issue reports are totally bogus, and another quarter are theoretical issues, but result in some sort of clean up in the code (e.g. no one can figure out how it could be exploited, but it gets refactored anyway).

I've been meaning to try a formal bounty program, as our software is a high value target (administrative tool running on over a million systems), but we're Open Source and don't have a lot of budget for bounties or anything else. If it produced hundreds of reports for every valid issue, it'd be counter-productive, for sure.


The bounty prices won't be the problem. The constant negotiation over 100,000 different variants of unchecked redirection and login fixation will be the issue. Time is money.

Hacker One should rename itself The Institute For Advanced Redirect Studies. I'm only partly kidding: bug bounty submitters are good at redirecting. Way better than I was before I started handling bounties. There's an interesting epistemological discussion to have about the low-value-yet-severity:critical bugs people file on bounty programs, because the level of cleverness required to exploit URL parsing differences between platforms is no less than what it takes to get an XSS bug.


It sounds like your system might be a candidate for https://wiki.mozilla.org/MOSS/Secure_Open_Source.

There's a form listed under "How to apply", and an email address nearby.

It appears that projects are only documented once audited, FWIW.


> Nah. Bug bounties don't work for services like CDNs. Maybe they do elsewhere. But for enterprise services, the noise rate is too high, and the very good bug finders are either salaried, free, or working for the adversary.

Yes, running a real bug bounty system requires professional security engineers and a professional security posture to sort through the noise. However, when the sole product you are selling is security (i.e. Cloudflare) you kind of have to admit it should be expected that they do so.

It isn't "too high", it simply requires a serious financial commitment to security in the terms of salaried security engineers.

As to your other point, no one works for free. Project Zero is paid for by Google. Security engineers are going to prioritize the purposes that make them real, hard cash.


Here's a question: what's the trade-off in terms of return on investment between hiring salaried security engineers to administer a bug bounty and hiring salaried security engineers to find bugs directly?

Parent's claim, as I read it, is that it's a better use of an enterprise CDN's money to hire security engineers to find bugs than to administer a bounty. Seems plausible to me. Where's that line?


> Parent's claim, as I read it, is that it's a better use of an enterprise CDN's money to hire security engineers to find bugs than to administer a bounty. Seems plausible to me. Where's that line?

Depends on the company, but tbpfh, most security engineers in a group tend to have a culture and that culture creates common blindspots. The fact they weren't testing for this sort of issue (i.e. parser memory leaks) is an example of something that seems obvious to some people that others ignore.

Maybe that is just my experience tho.


Facebook and Google have bug bounties. That's pretty big scale.


Facebook and Google are not, at base, enterprise services.


What would make sense (to me, not a business/marketing guy, nor a lawyer, at all) would be a t-shirt and free subscription as the offered thing, something which costs the company nothing.

Then for anything like this, publicly give a bonus gift that makes it worth people's while to report to them rather than sell it on the black market. Once it's gone through the legal dept. and so on.

Then they can be very quick with handing out t-shirts and so on to any and every micro-issue report, without the people running triage having to care about amounts or tax or whatever.

Having any kind of publicly offered payment for service (beyond a t-shirt bounty or services in kind) is just begging for legal issues, right?


> Having any kind of publically offered payment for service (beyond a tshirt bounty or services in kind) is just begging for legal issues, right?

https://hackerone.com/coinbase ($500-$10k) or https://hackerone.com/uber ($500-$10k) or https://hackerone.com/facebook ($500-$10k) or dozens of others have no trouble with it.


The reward includes a t-shirt, it isn't a mere t-shirt. You also get "12 months of CloudFlare's Pro or 1 month of Business service on us" (~$200). The reward is also not tiered.

The award may still not be all that much, but let's not make things up about them.


That's still pretty much as silly as a t-shirt. When a vulnerability was found in my hobby project, I paid 200 to the reporter as thanks. From my own pocket, for my own open source program.


If I needed CF Pro though I'd already be on it.

I mean I guess it's good if you're already on Pro and could do with the freebie year but it's not really much to get the whitehats auditing your systems for free*

* free unless they find something


> The reward includes a t-shirt, it isn't a mere t-shirt. You also get "12 months of CloudFlare's Pro or 1 month of Business service on us" (~$200). The reward is also not tiered.

I've never put any of my sites behind Cloudflare precisely because I never had faith their WAF would always be bug free and I'm not comfortable with their MitM position.

Getting me to use your service on a time-limited basis falls more under the category of a "try-it-so-you-buy-it" marketing ploy than a real bonus to me. It benefits Cloudflare more than the researcher for that reason, since if they use it, they'll be invested in continuing to "help" Cloudflare because they'll be dependent on it.

I'm sorry, I just don't buy that is anything but a marketing ploy wrapped up as a bonus.


Can someone tell me the implications of this in layman's terms?

For instance, what does "sprayed into caches" mean? What cache? DNS cache? Browser cache? If the latter, does it mean you are safe if the person who owns that cache is an innocent, non-technical user?


There are caches all over the Internet; Google and Microsoft run some of them, but so do virtually all Fortune 500 companies, most universities, and governments all over the world.

The best way to understand the bug is this: if a particular HTTP response happened to be generated in response to a request, the response would be intermingled with random memory contents from Cloudflare's proxies. If that request/response happened through someone else's HTTP proxy --- for instance, because it was initiated by someone at a big company that routes all its traffic through a Bluecoat appliance --- then that appliance might still have that improperly disclosed memory saved.


PINBOARD!!!!!!!!! (It's a web-crawling & caching service.)


There are all kinds of places where things are cached, both on- and offline. Your data may end up in:

* Browser caches.

* Sites like wayback machine or search engines that make copies of webpages and save them.

* Tools that store data downloaded from the web, e.g. RSS readers.

* Caching proxies.

* the list goes on and on.

I think what tptacek wanted to say is: it's just so common for people to download things from the web and store them without even thinking much about it. And all of those places where this happens can now potentially contain sensitive data.


Many mobile providers cache heavily as well. In my country, Vodafone does this.


Many services on the internet keep a copy of a page they have loaded in the past. Google does this, for example. It lets them do things like search across websites quickly.

Many of these caches are available online, to anyone who wants to look at them.

This bug meant that any time a page was sent through Cloudflare, the requester might receive the page plus some sensitive personal information, or credentials that could be used to log in to a stranger's account. Some of these credentials might let a bad actor pretend to be a service like Uber or Fitbit.

This very sensitive information might end up saved in a public cache, where anyone could find it and use it to do harm.


What are my rough odds of having stored a credential, if I were a provider?

What are the odds I had a credential stored?

We know the impact, but what are the odds to a provider and to a possible exposee?


It's reminiscent of the earlier days of the Squid cache.

When it had bugs and delivered up cached files, the typical symptom was that everyone in the company got unwanted porn.

Because the biggest user (by far) of the 'net was the person into porn and so 90% of the Squid cache was porn.


It served the wrong resource instead of failing to serve a resource? Back then, if I were to suffer this, what is the likelihood of a porn-for-cats experience?


Far worse than this. Yes, browser caches, but also web crawlers (like google)'s caches. This means that anyone who requested certain public content could have instead received secret content from completely unrelated websites.


As for the SHA-1 collision mentioned by jgrahamc[1] earlier today:

How am I going to explain this to my wife?

Actually a serious question. How do we communicate something like this to the general public?

[1] https://news.ycombinator.com/item?id=13713826


"It's like some extremely popular remailer company accidentally put badly or barely shredded copies of handled letters into other people's envelopes. Strangers' sensitive info is potentially sitting inside unsuspecting mailboxes worldwide."


> It's like some extremely popular remailer company accidentally put badly or barely shredded copies of handled letters into other people's envelopes.

Or used as confetti for a parade: http://www.npr.org/2012/11/27/166023474/social-security-numb...


> A significant number of companies probably need to compose customer notifications;

As a one-man company who has never done this before (and to the best of my knowledge never needed to): any guides/examples for writing a customer notification for security screw-ups like this? Or just recommendations? Thanks.


It's as easy as throwing a red banner on your website that briefly explains the situation and recommends that users change their passwords. If you take this more seriously, you can force a password reset for all users. It depends on how sensitive the information your users trust your site to hold is.
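If it helps, "force a password reset for all users" usually boils down to something like the sketch below - the schema (users/sessions/api_tokens tables and column names) is purely illustrative, not from any particular framework:

    import sqlite3

    def force_global_password_reset(db_path="app.db", leak_window_start="2016-09-22"):
        # leak_window_start is whatever date you decide your exposure began;
        # the default here is only a placeholder.
        conn = sqlite3.connect(db_path)
        with conn:
            # Every user must choose a new password on next login.
            conn.execute("UPDATE users SET must_reset_password = 1")
            # Existing cookies/session tokens stop working immediately.
            conn.execute("DELETE FROM sessions")
            # Also revoke API tokens issued during the exposure window.
            conn.execute("UPDATE api_tokens SET revoked = 1 WHERE created_at >= ?",
                         (leak_window_start,))
        conn.close()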


Email your customers, telling them to change their passwords, and link to some info about the leak. (in case they don't visit your website and miss seeing the security alert banner)

Advise them to change passwords for other services too, list sites possibly affected: https://github.com/pirate/sites-using-cloudflare/blob/master...


What a mess.

On the plus side, all those booter services hiding behind Cloudflare are probably being probed and classified/identified/disabled by competitors and probably the FBI. That is good.


> This is approximately as bad as it ever gets.

*as bad as it has ever gotten so far.


>Tavis found it by accident just looking through Google search results.

Curious whether there could be some automated way of preventing such a widespread cache poisoning in the future. Some ML trained on valid pages from a given domain?

Is it even possible to recover the original content of the documents or was the data randomly inserted into different parts?
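Probably nothing as fancy as ML is needed for a first pass. A cheap heuristic, sketched below, is to scan whatever documents you already hold (your own proxy cache, crawl archive, etc.) for strings that should never appear in a public page - the same internal markers people in this thread have been feeding to search engines:

    import re

    # Markers taken from this thread; extend as needed.
    LEAK_MARKERS = re.compile(
        rb"CF-Host-Origin-IP|CF-Force-Miss-TS|X-SSL-Server-Name|"
        rb"Internal Upstream Server Certificate|authorization: OAuth ",
        re.IGNORECASE,
    )

    def looks_contaminated(document: bytes) -> bool:
        # True if a cached document contains strings typical of the leaked proxy memory.
        return LEAK_MARKERS.search(document) is not None

    print(looks_contaminated(b"<html>... CF-Host-Origin-IP: 203.0.113.7 ..."))  # True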



