
It is far from over, too! Google Cache still has loads of sensitive information, a link away!

Look at this, click on the downward arrow, "Cached": https://www.google.com/search?q="CF-Host-Origin-IP:"+"author...

(And then, in Google Cache, "view source", search for "authorization".)

(Various combinations of HTTP headers to search for yield more results.)




> The infosec team worked to identify URIs in search engine caches that had leaked memory and get them purged. With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains. The leaked memory has been purged with the help of the search engines.

So I tried it too, and there's still data cached there.

Am I misunderstanding something? That above statement must be wrong, surely?

They can't have found everything even in the big search engines if it's still showing up in Google's cache, let alone the infinity other caches around the place.

EDIT: If the Cloudflare team sees this: I see leaked credentials for these domains:

android-cdn-api.fitbit.com

iphone-cdn-client.fitbit.com

api-v2launch.trakt.tv


I'm also seeing a ton from cn-dc1.uber.com with oauth, cookies and even geolocation info. https://webcache.googleusercontent.com/search?q=cache:VlVylT...


That's terrifying.

Thanks to Uber now requiring location services set to Always instead of just when hailing a car, my and others' personal location history even outside of Uber usage could have been compromised. Sweet.


To be fair, you were kind of a fool if you actually let Uber have your location at all times. As soon as they announced that, I blocked Uber from accessing my location. I only allow it when I take an Uber (which is almost never now).


Sometimes I'm in a rush and forget to turn it back to Never.

That doesn't make me a fool, it makes me human. Don't be a jerk. It's a dark pattern for a reason.


If you only sometimes forget, then that's not letting them have your location at all times, and you weren't called a fool.


Not a fool but ...


At least the location isn't embarrassing.[1]

[1] https://goo.gl/maps/FjQVttcZCpH2


Oh my gosh, that's the Ivey Business School, where I graduated from last year. I didn't expect this to hit so close to home...


so sorry for your loss


What did it show before it was taken down? In vague terms, of course.


Could someone enlighten me on why malloc and free don't automatically zero memory by default?

Someone pointed me to MALLOC_PERTURB_ and I've just run a few test programs with it set - including a stage1 GCC compile, which granted may not be the best test - and it really doesn't dent performance by much. (edit: noticeably, at all, in fact)

People who prefer extreme performance over prudent security should be the ones forced to mess about with extra settings, anyway.
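
For anyone who wants to try the same quick experiment, here is a minimal sketch (glibc-specific; MALLOC_PERTURB_ is the environment-variable form of mallopt(3)'s M_PERTURB):

    /* Run as: MALLOC_PERTURB_=42 ./a.out
       With the perturb byte set, glibc fills new allocations with the
       complement of that byte (0xd5 here) and fills freed blocks with the
       byte itself (0x2a), so stale heap data is never handed back as-is. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned char *p = malloc(64);
        if (!p)
            return 1;
        printf("first byte of fresh allocation: 0x%02x\n", p[0]);
        free(p);
        return 0;
    }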


Some old IBM environments initialized fresh allocations to 0xDEADBEEF, which had the advantage that the result you got from using such memory would (usually) be obviously incorrect. The fact that it was done decades ago is pretty good evidence that it's not about the actual initialization cost: these things cost a lot more back then.

What changed is the paged memory model: modern systems don't actually tie an address to a page of physical RAM until the first time you try to use it (or something else on that page). Initializing the memory on malloc() would "waste" memory in some cases, where the allocation spans multiple pages and you don't end up using the whole thing. Some software assumes this, and would use quite a bit of extra RAM if malloc() automatically wiped memory. It would also tend to chew through your CPU cache, which mattered less in the past because any nontrivial operation already did that.

I personally don't think this is a good enough reason, but it is a little more than just a minor performance issue.

That all being said, while it would likely have helped slightly in this case, it would not solve the problem: active allocations would still be revealed.
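
The demand-paging behavior described above is easy to observe; here is a rough sketch (reading the resident-set size from /proc/self/statm is a Linux-specific assumption):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return this process's resident set size in pages, per /proc/self/statm. */
    static long resident_pages(void)
    {
        long size = 0, resident = 0;
        FILE *f = fopen("/proc/self/statm", "r");
        if (f) {
            if (fscanf(f, "%ld %ld", &size, &resident) != 2)
                resident = 0;
            fclose(f);
        }
        return resident;
    }

    int main(void)
    {
        size_t len = 64 * 1024 * 1024;   /* 64 MiB */
        printf("before malloc: %ld pages resident\n", resident_pages());

        char *p = malloc(len);           /* address space only; no physical pages yet */
        if (!p)
            return 1;
        printf("after malloc:  %ld pages resident\n", resident_pages());

        memset(p, 0, len);               /* touching the pages forces them in */
        printf("after memset:  %ld pages resident\n", resident_pages());

        free(p);
        return 0;
    }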


> Some old IBM environments initialized fresh allocations to 0xDEADBEEF, which had the advantage that the result you got from using such memory would (usually) be obviously incorrect.

On BSDs, malloc.conf can still be configured to do that: on OpenBSD, junking (fills allocations with 0xdb and deallocations with 0xdf) is enabled by default on small allocations, "J" will enable it for all allocations. On FreeBSD, "J" will initialise all allocations with 0xa5 and deallocations with 0x5a.


> What changed is the paged memory model: modern systems don't actually tie an address to a page of physical RAM until the first time you try to use it (or something else on that page). Initializing the memory on malloc() would "waste" memory in some cases, where the allocation spans multiple pages and you don't end up using the whole thing. Some software assumes this, and would use quite a bit of extra RAM if malloc() automatically wiped memory. It would also tend to chew through your CPU cache, which mattered less in the past because any nontrivial operation already did that.

Maybe an alternative approach is to simply mark the pages to be lazily zeroed out when attached, in the Page Table Entries of the MMU. They wouldn't be zeroed out at the time of the malloc() call, but only when they are attached to a physical memory location (the first time you use it).


And it seems to me the OS should ensure the pages are zero'd out rather than user space (via malloc()) doing it, because it's still a security hole to let a process read data that it's not supposed to have access to (whether it's from another process or the kernel - it doesn't matter).


The OS already zeroes out pages, obviously. But malloc doesn't usually request memory from the OS; it takes a chunk from the already allocated heap.


Unsure, not my job. But I read stuff along those lines. A modern OS plays all sorts of games to delay doing work. Allocate a couple of megs of memory and the OS sets up some pointers in a page table. And yes it'll keep already zero'd pages handy. And mark pages as dirty to be scraped clean later.


It doesn't need to affect your CPU cache, because x64 processors have non-temporal writes (streaming stores) that bypass the cache.

The stuff about eagerly allocating pages is spot on though.

There is calloc which allocates and zeroes memory, but people don't use it as often as they should.
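
For reference, a cache-bypassing zeroing loop with SSE2 streaming stores might look roughly like this (a sketch only: zero_nt is a made-up name, and real code would handle unaligned buffers and tail bytes):

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdlib.h>

    /* Zero a 16-byte-aligned buffer whose length is a multiple of 16,
       using non-temporal stores so the writes bypass the CPU cache. */
    static void zero_nt(void *buf, size_t len)
    {
        __m128i zero = _mm_setzero_si128();
        __m128i *p = (__m128i *)buf;
        for (size_t i = 0; i < len / sizeof(__m128i); i++)
            _mm_stream_si128(&p[i], zero);   /* non-temporal store */
        _mm_sfence();                        /* order the streaming stores */
    }

    int main(void)
    {
        size_t len = 1 << 20;                /* 1 MiB, a multiple of 16 */
        void *buf = aligned_alloc(16, len);  /* streaming stores need alignment */
        if (buf)
            zero_nt(buf, len);
        free(buf);
        return 0;
    }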


Parsers don't usually need to hold onto what they're parsing for a very long time, so unless they were running this in parallel on a machine with 4k cores, I'd imagine it would be much more likely that a buffer overrun hits the middle of an already-freed allocation rather than going into an active one.

In terms of "wasting" memory, perhaps the kernel could detect that you are writing 0s to a COW 0 page and still not actually tie the page to physical RAM. (If you're overwriting non-0 data, well it's already in a physical page.)

I don't quite follow the details of the CPU cache issue and why that is more-than-minor.

I do think in this day and age we should be re-visiting this question seriously in our C standard libraries. If the performance issues are actually major problems for specific systems, the old behaviour could be kept, but after benchmarking to show that it really is a performance problem.


> In terms of "wasting" memory, perhaps the kernel could detect that you are writing 0s to a COW 0 page and still not actually tie the page to physical RAM.

Writing to your COW zero page causes a page fault. Now, in theory you could disassemble the executing instruction and if it's some kind of zero write, just bump the instruction pointer and go back to userspace - but then the very next instruction in your loop that zeroes the next 8 bytes will cause the same page fault. And the next. And the next...

Taking a page fault for every 8 bytes in your allocation is completely infeasible. You'd be better off taking the hit of the additional memory usage.


How about this idea: free() zeros or unmaps all memory it allocated. This shouldn't fault. The OS zeros pages when mapping them into the process space (which it should do anyway). I think that solves the problem.


free() doesn't know what portion of the memory you allocated actually got written to. So for the model where a large, page-spanning buffer is allocated and only a small portion used, this approach causes many unnecessary page faults at free() time as it tries to zero out lots of memory that was never used or paged in at all.


Large buffers just get unmapped, so the OS can fix that problem.


An invariant you get from most kernels is that all new memory pages are zeroed when mapped into processes (normally through mmap or sbrk), so you only have the paging problem when initializing with a value other than zero.
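
A tiny illustration of that invariant (a POSIX sketch; assumes MAP_ANONYMOUS is available on the platform):

    #include <assert.h>
    #include <stddef.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Fresh anonymous pages handed out by the kernel arrive zero-filled,
           so a process never sees another process's old data this way. */
        size_t len = 4096;
        unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(p != MAP_FAILED);
        for (size_t i = 0; i < len; i++)
            assert(p[i] == 0);
        munmap(p, len);
        return 0;
    }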


Zeroing on malloc and/or free would not have prevented this type of error, since the information disclosure was due to an overflow into an adjacent allocated buffer.

However, zeroing on free is generally a useful defense-in-depth measure because it can minimize the risk of some types of information disclosure vulnerabilities. If you use grsecurity, this feature is provided by grsecurity's PAX_MEMORY_SANITIZE [0].

[0]: https://en.wikibooks.org/wiki/Grsecurity/Appendix/Grsecurity...


Zeroing on alloc/free probably wouldn't have helped much with this bug. Data in live allocations would still be leaked.


> Could someone enlighten me on why malloc and free don't automatically zero memory by default?

The computational cost of doing so, I suspect.


Just like why most filesystems don't zero deleted files.


Neither of these are good reasons: I already talked about MALLOC_PERTURB_ (man mallopt) in my post and my naive performance tests, and we rarely get bad security holes based on data from deleted files left on filesystems.


Unfortunately, people write microbenchmarks of malloc and free a lot (and not completely without reason: they do quite often show up high in profiles).

For example, binary-trees on the Benchmarks Game is basically malloc/free bound (or at least is supposed to be as Hans Boehm originally designed it). Likewise, most JavaScript benchmarks (V8 splay, for example) are heavily influenced by raw allocation performance. Many people choose browsers and programming languages based on relatively small differences in these results. All of the incentives align in favor of performance, not security, because performance is easy to measure and security is not.


You asked for a reason, not for a good reason.

malloc/free were designed around 1972. That was a time when performance was much more important and security concerns didn't really exist.

Modern systems, like Go, do zero out newly allocated memory because they consider a bit more security to be more important than a bit more performance.

But changing the defaults of malloc/free is not really an option and it would probably break stuff.

Especially on Linux, where, I believe, malloc returns uncommitted pages, which increases the perf advantage in some cases.

Security conscious programmers can use calloc() or write their own wrappers over malloc/free.
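
A minimal sketch of such wrappers, assuming the caller can track allocation sizes (zmalloc/zfree are made-up names, and explicit_bzero is a glibc/BSD extension rather than standard C):

    #include <stdlib.h>
    #include <string.h>

    /* Zero on allocation by delegating to calloc. */
    static void *zmalloc(size_t n)
    {
        return calloc(1, n);
    }

    /* Zero on free; explicit_bzero is used so the wipe isn't optimized away. */
    static void zfree(void *p, size_t n)
    {
        if (p) {
            explicit_bzero(p, n);
            free(p);
        }
    }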


They aren't good reasons now. They were good reasons ~20 years ago.

The language spec should probably now default to zeroing memory unless you specifically ask it not to... and maybe that should be a verbose option :)


Are these results hardware independent? Maybe it makes a difference on older machines, or different architectures.


I imagine clearing memory on free is more relevant than MALLOC_PERTURB_?


calloc zeroes memory on allocation.


Yes, I think the question was something like "why doesn't malloc call calloc?".


Always nice to have options. Not zeroing memory on allocation might save a few cpu cycles.


It's pretty much the definition of false economy. Would you rather save a few cycles or suffer debilitating security bugs at random intervals? Always use calloc unless a) there's a proven performance problem and b) you know for a fact that due to careful inspection/static analysis/black magic malloc is safe. Then use calloc anyway because why risk it?
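
As a small illustration of that advice (a sketch; the struct and helper are made up): besides returning zeroed memory, calloc on mainstream implementations also fails cleanly if the element count times the element size would overflow, which a hand-rolled malloc(n * sizeof ...) does not:

    #include <stdlib.h>

    struct record {
        int  id;
        char name[64];
    };

    /* Zeroed and overflow-checked; malloc(n * sizeof(struct record))
       could silently wrap for huge n and return a too-small buffer. */
    struct record *alloc_records(size_t n)
    {
        return calloc(n, sizeof(struct record));
    }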


It depends on the size of the chunk of allocated memory. If it is quite large, time spent zeroing it can be substantial. Then again, if you're allocating in performance critical path, you're doing it wrong anyways.


It takes time to do that.


> that above statement must be wrong, surely?

Either they believe it's right, which means they're not competent enough to really assess the scope of the leak; or they don't believe it, but they went "fuck it, that's the best we can do".

In either case, it doesn't really inspire trust in their service.


you missed one possibility: that they're deliberately attempting to downplay the severity to make themselves look less incompetent


jgrahamc: can you list which public caches you worked with to attempt to address this? It does not inspire confidence when even google is still showing obvious results


Google, Microsoft Bing, Yahoo, DDG, Baidu, Yandex, and more. The caches other than Google were quick to clear and we've not been able to find active data on them any longer. We have a team that is continuing to search these and other potential caches online and our support team has been briefed to forward any reports immediately to this team.

I agree it's troubling that Google is taking so long. We were working with them to coordinate disclosure after their caches were cleared. While I am thankful to the Project Zero team for informing us of the issue quickly, I'm troubled that they went ahead with disclosure before Google crawl team could complete the refresh of their own cache. We have continued to escalate this within Google to get the crawl team to prioritize the clearing of their caches as that is the highest priority remaining remediation step.



Thousands of years from now, when biological life on this planet is all but extinct and superintelligent AI evolving at incomprehensible rates roam the planet, new pieces of the great PII pollution incident that CloudFlare vomited across the internet are still going to be discovered on a daily basis.


I was expecting this:

Thousands of years from now, when biological life on this planet is all but extinct and superintelligent AI evolving at incomprehensible rates roam the planet, taviso will still be finding 0-days impacting billions of machines on an hourly basis.

Be glad that Google is employing him and not some random intelligence agency.


I have huge respect for taviso and his team. Their track record in security work is so impressive. They are without a doubt extremely capable.

However, I am always wondering: are they really globally unique in their work and skill? So that they are really the ones finding all the security holes before anyone else does because they are just so much better (and/or with better infrastructure) than anyone else? Or is it more likely that on a global scale there are other teams who at least come close regarding skill and resources, but who are employed by actors less willing to share what they found?

I really do hope Tavis is a once-in-a-lifetime genius when it comes to vulnerability research!


One of the big controversies in the infosec world is people who sell 0-day exploits to "security companies." Some go for tens of thousands of dollars. Ranty Ben talked about how some people live off this type of income when it came up in a panel discussion at Ruxcon 2012.


No, he is definitely not alone. Some work for other security companies or antivirus companies, and some sell the vulnerabilities they find.


What's funny is he kinda just stumbled upon this bug accidentally while making queries.

If I were just casually googling two weeks ago and came across a leaked cloudflare session in the middle of my search results I think I would have vomited all over my desk immediately. Dude must have been sweating bullets and trembling as he reached out on twitter for a contact, not knowing yet how bad this was or for just how long it's been going on.




I believe the 2009 Yahoo-Bing agreement is still in force, where Bing provides search results on Yahoo.com:

http://news.bbc.co.uk/2/hi/business/8174763.stm

I know the search I performed now on Yahoo states "Powered by Bing™" at the bottom.


Yeah, I thought that could be it as well, but this was at the bottom of the Yahoo result:

<!-- fe072.syc.search.gq1.yahoo.com Sat Feb 25 03:58:27 UTC 2017 -->

Given they are identical results it's pretty clear it must be a shared index I suppose, that or the leaked memory was cached.


Yahoo provides a front end to the search results, Bing provides the crawl/search/archives.


What the hell does Yahoo even do anymore? Just email? Or is that just a proxy to hotmail?


Finance, News, Mail, Fantasy Sports, etc to name a few where they are still in the top three of the category.

Yahoo was never really a search company (even at its founding, it was a "directory", not a "search"). Sure, they pretended fairly well from 2004ish (following their move off Google results) to 2009 (when they did the Bing deal), but the company never really nailed search or, more importantly, search monetization, despite acquiring one of the first great search engines (Altavista) and the actual inventor of the tech Google stole for its cash cow AdWords (Overture).


Isn't Yahoo search just a frontend to bing nowadays?


Some IPv6 internal connections, some websocket connections to gateway.discord.gg, rewrite rules for fruityfifty.com's AMP pages, and some internal domain `prox96.39.187.9cf-connecting-ip.com`.

And some sketchy internal variables: `log_only_china`, `http_not_in_china`, `baidu_dns_test`, and `better_tor`.


Exactly. It looks like the cleanup so far only targeted the most obvious matches (just searching for the Cloudflare-unique strings). There's surely more out there where "only" the user data leaked, and that is still in the caches.


The event where one line of buggy code ('==' instead of '>=') creates global consequences, affecting millions, is a great illustration of the perils of monoculture.

And monoculture is the elephant in the room most pretend not to see. The current engineering ideology (it is ideology, not technology) of sycophancy towards big and rich companies, and popular software stacks, is sickening.
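
For context, the class of bug being referred to looks roughly like this (an illustrative sketch, not the actual generated parser code):

    #include <stddef.h>

    /* Illustrative only: scanning a buffer with an equality test on the end
       pointer.  If p can ever advance past pe in a single step, the == check
       never fires and the scan walks off into adjacent memory. */
    static void scan(const unsigned char *buf, size_t len)
    {
        const unsigned char *p = buf;
        const unsigned char *pe = buf + len;

        for (;;) {
            /* ... consume *p, possibly advancing p by more than one ... */
            p++;
            if (p == pe)   /* buggy: p >= pe would catch the overshoot */
                break;
        }
    }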


How about clearing the entire cache? (Or at least everything created in the last few months.)

I've never seen anyone suggest it, so I suppose it cannot or should not be done for some reason?


You are asking them to delete petabytes of data. Some parties are interested in owning such data.


The real problem is going to be where history matters and you can't delete - for example archive.org and httparchive.org. There is no way to reproduce the content in the archive obviously, so no one will be deleting it. The only way is to start a massive (and I mean MASSIVE) sanitization project...


Or clearing all the cached content from Cloudflare-proxied websites. I think that's doable.


At this point the problem is not on Cloudflare's side: search engines crawled tons of data with leaked information. Even if Cloudflare drops its caches, the data is already on 3rd-party servers (search engines, crawlers, agencies).


That's why he asked that the caches of all Cloudflare sites are dropped, not by Cloudflare but by these 3rd parties.


That might work. If said 3rd parties were interested in helping. Most of them might be but it just takes one party refusing to help and then you've still got the data out there.


No, I meant: get a list of all domains using Cloudflare, and get those removed from the crawlers' caches.


Offtopic: "with all due respect" is often followed by words void of respect.


He is British. "With all due respect" means no respect is due. I don't think it's possible to show less respect while appearing polite. In other words, them's fighting words.

http://todayilearned.co.uk/2012/12/04/what-the-british-say-v...


This is perfectly fine if the amount of respect due is sufficiently low.


Given the answers that Cloudflare is giving, I'd say it's quickly approaching zero.


Ha! Excellent point!


Incredible. Are they really trying to pin it on Google? Yes, clearing cache would probably remove some part of the information from public sources. But you can never clear all cache world-wide. Nor can you rely that the part that was removed was really removed before being copied elsewhere.

The way I see it, the time given by GZero was sufficient to close the hole; it was not meant to give them a chance to clear caches world-wide. They have a PR disaster on their hands, but blaming Google won't help with it.


You really have to see this to really grasp the severity of the bug.


The scope of this is unreal on so many levels.

20 hours since this post and these entries are still up ...


Can anyone provide some context, please?


For anyone being linked directly to the post: the link back to the parent page is right on top: https://news.ycombinator.com/item?id=13718752

You can also click on "parent", and repeat as necessary.


The bottom of the file has contents from another connection. Notably

    HTTP/1.1
    Host gateway.discord.gg



After 16 hours, those cached pages are still up...


While it is good that you discovered leaked content is still out in the wild, your tone is somewhat condescending and rude. No need for it.


You might not know the history here. Tavis works at Google and discovered the bug. He was extremely helpful and has gone out of his way to help Cloudflare do disaster mitigation, working long hours throughout last weekend and this week.

He discovered one of the worst private information leaks in the history of the internet, and for that, he won the highest reward in their bug bounty: a Cloudflare t-shirt.

They also tried to delay disclosure and wouldn't send him drafts of their disclosure blog post, which, when finally published, significantly downplayed the impact of the leak.

Now, here's the CEO of Cloudflare making it sound like Google was somehow being uncooperative, and also claiming that there's no more leaked private information in the Bing caches.

Wrong and wrong. I'd be annoyed, too.

--

Read the full timeline here: https://bugs.chromium.org/p/project-zero/issues/detail?id=11...


I think this is a one-sided view of what really happened.

I can see a whole team at Cloudflare panicking, trying to solve the issue, trying to communicate with the big crawlers to evict all of the bad cache they hold, all while trying to craft a blog post that would save them from a PR catastrophe.

All the while Taviso is just becoming more and more aggressive to get the story out there. 6 freaking days.

Short timelines for disclosure are not fun.


There was no panic. I was woken at 0126 UTC the day Tavis got in contact. The immediate priority was to shut off the leak, but the larger impact was obvious.

Two questions came to mind: "how do we clean up search engine caches?" (Tavis helped with Google), and "has anyone actively exploited this in the past?"

Internally, I prioritized clean up because we knew that this would become public at some point and I felt we had a duty of care to clean up the mess to protect people.


> "has anyone actively exploited this in the past?"

Has this question been answered yet?


We're continuing to look for any evidence of exploitation. So far I've seen nothing to indicate exploitation.


>> "has anyone actively exploited this in the past?"

Wouldn't your team now have to decide how to deal with this even after some specific well-known caches have been cleared? I mean, there's no guarantee that someone hasn't collected all this data and won't use it to target those Cloudflare customer sites. Are you planning to ask all your customers to reset all their access credentials and other secrets?


Google Project Zero has two standard disclosure deadlines: 90 days for normal 0days, and 7 days for vulnerabilities that are actively being exploited or otherwise already victimizing people.

There are very good reasons to enforce clear rules like this.

Cloudbleed obviously falls into the second category.

Legally, there's nothing stopping researchers from simply publishing a vulnerability as soon as they find it. The fact that they give the vendor a heads-up at all is a courtesy to the vendor and to their clients.


> The fact that they give the vendor a heads-up at all is a courtesy to the vendor and to their clients.

It is the norm, and it is called responsible disclosure. You're trying to do the least harm, and the least harm is a balance between giving the developers some time to develop a fix and getting the news out there so customers, and customers of customers, are aware of the issue.


With all due respect, they should suffer a pr catastrophe.


In this case I feel your comment is misdirected. Cloudflare was condescending in their own post above, which he was replying to: "I agree it's troubling that Google is taking so long" is a slap in the face to a team that has had to spend a week cleaning up a mess they didn't make. It is absolutely ridiculous that they are shitting on the team that discovered this bug in the first place, and to top it all off they're shitting all over the community as a whole while they downplay and walk the line between blatantly lying and just plain old misleading people.


I would be pretty mad if a website that I was supposed to trust with my data made an untrue statement about how something was taken care of when it was not, and then published details of the bug while the cached data is still out in the wild, now exploitable by any hacker who was living under a rock for the past few months.


Actually, I proxy two of my profitable startup frontend sites with CloudFlare, so I am affected (not really), but I'm giving them the benefit of the doubt as they run a great service and these things happen.


They are well past deserving the benefit of the doubt.

I would also advise you to notify your cloud-based services' customers of how they might be affected (yes, really); trust erosion tends to be contagious.


Agreed. The condescending downplaying tones displayed just aren't acceptable.


We only host our static corporate sites (not apps) and furthermore never used CF email obfuscation, server-side excludes or automatic https rewrites, and thus are not vulnerable.


Hi,

I think you have misunderstood the issue. Just because YOU did not use those services does not mean your data was not leaked. It means that other people's data was not leaked on YOUR site, but YOUR data could have been leaked on other sites that were using these services.


> We only host our static corporate sites (not apps)

If this part is true, they're not vulnerable. Only data that was sent to CloudFlare's nginx proxy could have leaked, so if they only proxy their static content, then that's the only content that would leak.

The rest of their comment gives the wrong impression though, yeah.


> Only data that was sent to CloudFlare's nginx proxy could have leaked, so if they only proxy their static content, then that's the only content that would leak.

The way it worked, the bug also leaked data sent by the visitors of these "static sites": IP addresses, cookies, visited pages, etc.


Thanks for clarifying. You are absolutely right.


As far as I know, nothing like this has ever happened at any CDN before.


There have definitely been incidents where CDNs mixed up content (of the same type) between customers. Not exactly like this, but close.


I find it troubling that the CEO of Cloudflare would attempt to deflect their culpability for a bug this serious onto Google for not cleaning up Cloudflare's mess fast enough.

Don't use CF, and after seeing behavior like this, don't think I will.


On a personal note, I agree with you.

Before Let's Encrypt was available for public use (beta), CF provided "MITM" https for everyone: just use CF and they would issue you a certificate and serve https for you. So I tried that with my personal website.

But then I found out that they replaced a lot of my HTML, resulting in mixed content on the https version they served. This is the support ticket I filed with them:

  On wang.yuxuan.org, the css file is served as:

  <link rel="stylesheet" title="Default" href="inc/style.css" type="text/css" />

  Via cloudflare, it becomes:

  <link rel="stylesheet" title="Default" href="http://wang.yuxuan.org/inc/A.style.css.pagespeed.cf.5Dzr782jVo.css" type="text/css"/>

  This won't work with your free https, as it's mixed content.

  Please change it from http:// to //. Thanks.

  There should be more similar cases.
But CF just refused to fix that. Their official answer was that I should hardcode https. That's bad because I only have https with them; it would break as soon as I leave them (I guess that makes sense to them).

Luckily I have Let's Encrypt now and no longer need them.


Well, the CEO does have beef with Google: https://blog.cloudflare.com/post-mortem-todays-attack-appare...

This led to Cloudflare refusing to implement support for Google Authenticator for 4 years.


lol, really? Google authenticator is just TOTP - it's an open standard. That seems childish.

Also, the notion that the CEO of an internet company would have a "beef with Google" is pretty funny.


This comment greatly lowers my respect for Cloudflare.

Bugs happen to us all; how you deal with this is what counts, and wilful, blatant lying in a transparent attempt to deflect blame from where it belongs (Cloudflare) onto the team that saved your bacon?

I've recommended Cloudflare in the past, and I was planning, with some reservations, to continue to do so even after disclosure of this issue. But seeing this comment? I don't see how I can continue.

(For the sake of maximum clarity, I take issue: 1) with the attempt to suggest the main issue is in clearing caches, not the leak itself. It doesn't matter how fast you close the barn door after the horse is gone and the barn has burned down. 2) With the blatantly false claim that non-Google caches have been cleared, or were faster to clear than Google's. Cloudflare should know, better than anyone, the massive scope of this leak, and the fact that NO search engine's cache has or could be cleared of this leak. If you find yourself in a situation so bad you feel like you need to misdirect attention to someone else, and it turns out no one else is actually doing anything so you have to lie about that... maybe you should just shut up and stop digging?)


Hey! Don't keep the horse locked in if the barn is burning!


> I agree it's troubling that Google is taking so long.

Google has absolutely no obligation to clean up after your mess.

You should be grateful for any help they and other search engines give you.


You're right, I guess. (Disclaimer: Not affiliated with any company affected / involved)

But I still find it troubling. Is it their mess? No. Does it affect a lot of people negatively? Yes. I expect Google to clean this up because they're decent human beings. It's troubling because it's not just Cloudflare's mess at this point.

It reminds me of the humorous response to "Am I my brother's keeper?", which is "You're your brother's brother"


Google cleaning this up is going to take a ton of man-hours, which will cost a LOT of money. How much money is Google obligated to spend to help a competitor who fucked up? Are they supposed to just drop everything else and make this the top priority?


I don't see this as them as helping a competitor. The damage has been done (in terms of customer relations).

I view leaving up the cached copy of leaked data as being a jerk move - not towards Cloudflare, but to anyone whose data was leaked.

This is an opportunity for Google to show what they do with rather sensitive data leaks - do they leave them up or scrub them?

Had damage from the leak already been done (to those whose data it was)? Probably. Even taking that into account, I think Google search comes off as a jerk in this situation.


I feel like you are operating under the assumption that deleting this leaked data is trivial, that they just have to hit a delete button and the data is gone.

This is not the case; it is not obvious, trivial, or easy to delete the leaked data. It is not simple to find it all. This is not like they are being given a URL and being asked to clear the cached version of it; they are being asked to search through millions of pages for possibly leaked content.


I despise the way you've dealt with this issue with as much dishonesty as you thought you could get away with.

I will be migrating away from your service first thing Monday. I will not use your services again and will ensure that my clients and colleagues are informed of your horrific business practices now and in the future.


Next time, beware of parsers. Or formally verify them :)

https://arxiv.org/pdf/1105.2576.pdf

(disclaimer: co-author)


For those who haven't been following along, this is the CEO of CloudFlare lying in a way that misrepresents a major problem CloudFlare created. Additionally, they are trying to blame parts of this problem on those who told them about the problem they created.


At least tell me they got their t-shirts lol.


>I'm troubled that they went ahead with disclosure before Google crawl team could complete the refresh of their own cache.

It sounded like they (cf) were under a lot of pressure to disclose ASAP from project zero and their 7 day requirement...


eastdakota is one of the cloudflare guys, so "they" in that sentence can only refer to Google (see also the previous paragraph/sentences, where eastdakota used "we" for cloudflare).


He's the CEO


With something this drastic, 7 days was generous.


>> We have continued to escalate this within Google to get the crawl team to prioritize the clearing of their caches as that is the highest priority remaining remediation step.

If you are using the same attitude with their team as you use in this comment, I'm pretty sure they will be thrilled to set aside all their regular work and help you out cleaning up an enormous mess created by a bug in your service.


Oh wow, taking a shit on Google after they helped you by reporting a critical flaw in your infrastructure.

I'm no longer using CF for my own projects, but you've just cemented my decision that none of my clients will either.


https://webcache.googleusercontent.com/search?q=cache:lw4K9G...

    Internal Upstream Server Certificate
    ...
    /C=US/ST=California/L=San Francisco/O=Cloudflare Inc./OU=Cloudflare Services - nginx-cache/CN=Internal Upstream Server Certificate
That really doesn't look good.


Just to point out, this is apparently a cert used for communicating between Cloudflare's services, which has (presumably) been replaced. Cloudflare customers' certs weren't exposed.


Correct. That's that cert.


Just to be clear: is this a cert used for authenticating with Cloudflare's systems or just for encryption? If used for authentication, you need to ensure it hasn't been stolen and used before this was found by P0.


Lol, Google just purged that search.

EDIT: but there's still plenty of fish: http://webcache.googleusercontent.com/search?q=cache:lw4K9G2...

This will take weeks to clean, and that's just for Google.

EDIT2: found other oauth tokens, lots of fitbit calls... And this just by searching for typical CF internal headers on Google and Bing. There is no way to know what else is out there. What a mess.


Ouch, you really see everything :

> authorization: OAuth oauth_consumer_key ...

What a shit show. I'm sorry, but at that point there must be consequences for incompetence. Some might argue "But nobody could have done anything" ...

I'm sorry, CF has the money to ditch C entirely and rewrite everything from the ground up with a safer language; I don't care what it is: Go, Rust, whatever.

At that point, people using C directly are playing with fire. C isn't a language for highly distributed applications; it will only distribute memory leaks ... With all the wealth there is in the whole Silicon Valley, trillions of dollars, there is absolutely zero effort to come up with an acceptable solution? All these startups can't come together and say: "Ok, we're going to design or choose a really safe language and stick to that"? Where does all that money go, then? Because this bug is going to cost A LOT OF MONEY to A LOT OF PEOPLE.


These guys were probably saved by using OAuth - there is a consumer secret (which the "_key" is just an identifier for) and an access token secret, both of which are not sent over the wire. Just a signature based on them. (The timestamp and nonce prevent replay attacks.)

OAuth2 "simplified" things and just sends the secret over the wire, trusting SSL to keep things safe.


Does this have anything to do with CloudFlare's ambitious attempt to be the first service to proxy your https traffic to your users?

Perhaps the largest MITM ever eh?


This actually happened because they started to rewrite it all, according to their blog post.


Started to re-write it...in C


Good. They're trying to clean up all the private data leaked everywhere. I'm tempted to say "why couldn't they figure out this Google dork themselves", but they've probably been slammed for the past 7 days cleaning up a bunch of stuff anyway.


You have no idea.


The effort you're putting into cleaning up someone else's mess cannot be overstated, nor can it be sufficiently appreciated. Thanks!


Any chance you can describe why these cached pages missed the purge that Cloudflare initiated? Seems like Cloudflare should have brought in an outside expert to try to exploit this issue before the disclosure was made.


For vulnerabilities with immediate exploit exposure, where people are currently being victimized by the flaw, Project Zero has a 7-day embargo.

The short waiting period balances the vendor's interest in coordinating the smoothest fix to the problem with the public's interest in knowing its exposure and maximizing its options for reacting to the exposure.

The fixed waiting period keeps the process sane. Every vendor you'll ever disclose a serious vulnerability to will try to delay disclosure, usually repeatedly. If you set a precedent of making arbitrary exceptions, you'll never be able to stare anyone down.

Again: as the reporters, you're trying to balance the vendor's interests with those of the public. Your credibility in these situations is pretty important, not just for this vulnerability, but for the next ones. With P0, we all know there will be a long series of "next ones" to be concerned about.


I definitely understand the embargo, but this is one of those situations where the vuln was already fixed and it's likely very few malicious actors (possibly 0, but of course who knows) were aware of its existence.

I feel like adding even just another day or two would've allowed them to purge more of these search results. I think that would greatly outweigh the increased risk of letting it remain undisclosed for slightly longer.


Thank you for your thoughtful reply; I realize the difficult situation you are in.


Hah, no, my situation is super easy; it is "partisan bystander." I don't work for Google.


FYI, I'm seeing some more of these results show up (with active caches) for the following searches:

"CF-RAY" "CF-Force-Miss-TS"

"X-SSL-Server-Name"

"Internal Upstream Server Certificate0"


CF-RAY isn't internal and will show up in any CloudFlare hosted site's response headers.


I'm aware of this, but combined with "CF-Force-Miss-TS" that search was turning up a number of clear examples of cached Cloudflare memory data.


Your hard work is appreciated.


Not sure if you'll see this, but I've noticed that the cache links have been removed on literally all hits for these queries.

And yet, I occasionally see working cache links on relevant unaffected pages.

Really, really awesome to see this kind of response. It's an obvious course of action (also considering corporate liability that you're publicly holding/offering this data) but it's really cool to see everyone work to fix this en masse so quickly.

I think a lot of people would enjoy hearing campfire battle stories of the past ~week once this is all over.


Thank you for all your hard work.


> This will take weeks to clean, and that's just for Google.

Couldn't Google just purge all cached documents which match any Cloudflare header? This will probably purge a lot of false positives, but it's just cached data, so would that loss really matter? My guess is that this approach should not take more than a few hours on Google's infrastructure.

Of course, this leaves the problem of all the other non-Google caches out there.


OAuth1 doesn't send the secrets with the requests, just a key to identify the secret and a signature made with the secret.

OAuth2 does send the secret, typically in an "Authorization: Bearer ..." header.

The uber stuff that somebody else linked to looks like a home-grown auth scheme and it appears that "x-uber-token" is a secret, but hard to know for sure.


So while people are having fun here with search queries, how many scripts are already up and running in the wild, scraping every caching service they can think of in creative ways for useful data...

This is an ongoing disaster, wasn't this disclosed too soon?


The "well-known chat service" mentioned by Tavis appears to be Discord, for the record.

edit: Uber also seems to be affected.


>It is a snapshot of the page as it appeared on Feb 21, 2017 20:20:45 GMT

So the issue wasn't fully fixed on Feb 19, or Google's cache date isn't accurate?


It seems like the reasonable thing for Google to do is to clear their entire cache. The whole thing. This is the one thing that they could do to be certain that they aren't caching any of this.


What about Bing, Baidu, Yandex, The Internet Archive, and Common Crawl? What about caches that are surely maintained by the NSA, ФСБ, and 3PLA?


Of course. Google dumping their cache puts only a small dent into the problem, but I feel that it's their responsibility to the innocent site operators caught in the middle of this.


Cloudflare's incompetence isn't Google's responsibility, particularly when Google wiping out their caches and damaging their own search results doesn't fix the problem. Hackers know how to use more than one search engine.


That only gives them an excuse to do nothing about this. All those companies should immediately go ahead and update any data that could have possibly leaked + inform their customers.


CF should be thankful Google is doing any of this; clearing their entire cache would cost Google money to re-index the web from scratch.


That might be a bit too extreme. But they should do something quickly to try to find all of these.


I would say Cloudflare should hire them to try to find them. It's really not on Google, IMO (unless caching has some implications regarding storing sensitive data).



Wow, I just tried this, the first result with a google cache copy has a bunch of the kind of data described. Although there was only one result with a cache.


The second page had a result with an OAuth2 Bearer token in it.


PII, OAuth data, etc.


I've so far seen an OAuth key for Fitbit (via their Android app) and API keys for Trakt (though apparently that service doesn't use them?)

I don't know, this just seems catastrophic.


I searched for

"CF-Host-Origin-IP:" token

.... uhm is that what I think I'm seeing???


The first couple I looked at were requests to Uber and Fitbit...


One of my Uber rides two weeks ago went completely nuts. Both my app and my driver's app screwed up at the same time; I was never picked up, and then seconds later the app claimed I had reached my destination.

You have to wonder whether something like this is implicated.


That's one phenomenal leap of logic there. Why would you think that?


Merely that both my app and the driver's app screwed up at the same time, and both have a good chance of hitting the same Uber endpoint.

Apps that consume APIs would be more sensitive to unexpected junk than browsers.


But there are so many other much more likely reasons why something like that would have happened, it is quite a leap to think that it is somehow related to this issue.


Without disagreeing, can you give me an example?

And it's just a speculation. Shrug.


One simple explanation could be that the road was between very large concrete buildings or the area has some sort of GPS interference (there is one place in Tokyo that jumps my GPS, and probably others', by about 300m to the same location every time). Another simple explanation is that the software has a bug in when it thinks you arrive, triggered by some extremely bizarre scenario (hence you both had it happen simultaneously).

I don't know how it works in the back so this is all speculation of course.


Yep, but I'd already taken an Uber ride from the exact same place the day before. And everything went smoothly.


Probably not.

If someone knew about this exploit they're not going to be messing with people's Uber rides for lulz.


I wasn't implying intention.


This is quite bad. I hope Google can put some effort into clearing its cache too.


Time to find out where various "booter" sites are actually hiding.



