List of Sites Affected by Cloudflare's HTTPS Traffic Leak (github.com/pirate)
914 points by emilong on Feb 24, 2017 | 215 comments



Just got this classy spam from dyn.com. Wonder if they're going through this list emailing every domain contact.

> As you may be aware, Cloudflare incurred a security breach where user data from 3,400 websites was leaked and cached by search engines as a result of a bug. Sites affected included major ones like Uber, Fitbit, and OKCupid.

> Cloudflare has admitted that the breach occurred, but Ormandy and other security researchers believe the company is underplaying the severity of the incident …

> This incident sheds light and underlines the vulnerability of Cloudflare's network. Right now you could be at continued risk for security and network problems. Here at Dyn, we would like to extend a helpful hand in the event that your network infrastructure has been impacted by today's security breach or if the latest news has you rethinking your relationship with Cloudflare.

> Let me know if you would be interested in having a conversation about Dyn's DNS & Internet performance solutions.

> I look forward to hearing back from you.


I suppose it could be seen as a response in kind after: https://blog.cloudflare.com/dyn-issues-affecting-joint-custo...

I would consider an email a bit of an escalation though, as opposed to a blog post.


The DYN attack affected Cloudflare customers, and we received a lot of support tickets that day. The blog post was more than warranted. The CEO and managers made sure the sales people weren't scummy in their tactics.


See the message I received as a Dyn customer... The sales people were scummy in their tactics.


I don't think the Cloudflare message is nearly as scummy as the Dyn messaging.

Dyn makes it seem like the entire underlying tech at Cloudflare is vulnerable.

P.S. I no longer work at Cloudflare, so I don't really care, just my .02


It may not be clear from that blog post, but internally, management stressed on multiple occasions immediately after the Dyn incident: "don't be an asshole".


I've made no claim that Cloudflare did anything wrong with that blog post.

I can, however, see how Dyn themselves might not have liked it all that much. And in saying so, I think it helps contextualise both the content and the medium that Dyn chose, as seen in r1ch's post.


I'm no great fan of Dyn, but I spend all day hearing that "security is important", that my users and enterprises require security, and that I should do X, Y, and Z for security.

If Dyn delivers where Cloudflare does not... it's not something that can be dismissed out of hand.

Maybe Dyn's email was rude, maybe it wasn't, but I think far too few people make real decisions based on security, and I'm a little forgiving of pushes in the other direction.


Is there some hard reason to believe that this kind of bug cannot happen on the Dyn stack? If all they have is "we got lucky, pick us instead" then they are just being scummy.


Here's the email I got from Cloudflare as a Dyn customer during the DDOS attacks:

> "As I'm sure you're aware, DDoS attacks on Dyn's network have caused massive outages for millions of sites. This prompted me to reach out on the behalf of Cloudflare to see if we can be helpful.

> Over the last 24 hours, we've been helping other Dyn customers migrate to Cloudflare to mitigate the risk.

> Who on your team would be the correct person to explore this with?"

It's not much better.


I think it's a bit more forgivable if they did it in response to this message. A good blend of cheeky and poking with a stick.


It's clever but feels at least a 3/10 shitty. Dyn is an old company and back in the day they provided free subdomains while nobody else did. I haven't used them recently because their pricing seems so high. How do others feel about them?


A few years ago I had Dyn hosting my DNS records for a website that had Amazon affiliate links. For some reason, one day someone at Amazon mistook those links for spam and contacted a service called MarkMonitor. MarkMonitor contacted Dyn and Dyn took down my account, shutting off access to the website. Without ever contacting me.

I was able to get my account reinstated after clearing the whole thing up with MarkMonitor and Amazon. I then removed all affiliate links and moved my DNS hosting elsewhere.

Never using Dyn again.


I still have a lifetime standard DNS subscription with them from back in the day when they were dyndns.org and you could physically mail them cash. The DNS hosting has been very solid (except for the day Mirai took them offline) but the standard query limits are way too low for any moderately trafficked website. All the managed DNS providers I looked at seem to have very restrictive query limits without a "enterprise - contact us" plan, one of the reasons why I decided to just do DNS-only on Cloudflare.


Why not he.net?


I use he.net DNS quite a lot for personal projects. I've had a few instances where DNS was not resolving that make me a bit cautious to move larger sites onto their service.


> I haven't used them recently because their pricing seems so high. How do others feel about them?

My feeling is that the free dynamic DNS option that comes with a Namecheap registration offers everything Dyn did, except the built-in support on cheap home routers.


Sorry bout that. Wasn't really even my idea, either.


They are strong competitors to each other, don't trust a word they say.


Coming from a company that regularly goes down to DDoS attacks :thinking:


Do you have any time in mind other than 2016-10-21?


I wrote this(1) script to check for affected sites in your local Chrome history. It checks for the `cf-ray` header in the response headers from each domain. It is not an exhaustive list, but I was able to find a few important ones, like my bank's site.

1: https://gist.github.com/kamaljoshi/2cce5f6d35cd28de8f6dbb27d...
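For what it's worth, the core of that check — requesting the domain and looking for the `cf-ray` header — can be sketched in Python (the helper names are mine, not from the gist):

```python
import urllib.request

# Response headers added by Cloudflare's reverse proxy (cf-ray is the telltale one).
CLOUDFLARE_HEADERS = ("cf-ray", "cf-cache-status")

def headers_indicate_cloudflare(headers):
    """Return True if any Cloudflare-specific response header is present."""
    lowered = {name.lower() for name in headers}
    return any(h in lowered for h in CLOUDFLARE_HEADERS)

def check_domain(domain, timeout=5):
    """HEAD-request the domain and test its response headers for cf-ray."""
    req = urllib.request.Request("https://%s/" % domain, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return headers_indicate_cloudflare(resp.headers)
```

Note that a cf-ray header today only shows the site is currently behind the proxy; it proves nothing about whether that particular site's data leaked.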


I wish 1Password had a feature where you could put in a list of domains like this, or an "Auto Change Possibly Compromised Passwords" feature.


Isn't this what Watchtower is supposed to be for? I have no idea if AgileBits is going to add this list to Watchtower, though.


Disclaimer: I work for AgileBits, makers of 1Password

I am helping comb through a list of sites to see which of those has suggested password updates.

I think we've had very few sites suggest updating the password though. Have you all seen any sites explicitly state users should update? If so I'd love a list so we can get them in Watchtower.

Kyle


I would subscribe to Watchtower, or, heck, probably even 1Password Families, if AgileBits took a more aggressive/proactive approach to password updates:

1. Prompted batch password resets for services whose vulnerabilities have been exposed, before those exposed admit their breach to consumers (e.g. all services compromised by the Cloudflare dump; usually consumers are the last to know).

2. Performed all password resets in a secure AgileBits browser (assuming the browser built into 1Password on iOS is secure).

3. Had an optional prompt to prevent unintentionally syncing/backing up 1Password data, or accessing said secure browser, without first confirming VPN status with the operating system (maybe even prompt the user to connect to a VPN before unlocking if a VPN connection isn't established).

Alternatively, I'd subscribe if 1Password offered a browser extension integrated with a reputable internet security service's browser (one with consistently high, multi-platform AV-Test/NSS performance), and Watchtower were as sensitive as described above and integrated with said internet security service.

This is coming from a longtime 1Password Pro user (3-6 yrs) who recently got his family on the 1Password bandwagon.


Yep, I totally agree. Before I read this thread, I actually tweeted to 1Password to ask if they would be willing to produce some way for me to quickly cross reference the Cloudbleed sites against my 1Password vault.

I think Watchtower alerts should be based on any publicly known breach, not based on what the companies themselves say.

At this point, I consider any account using an affected Cloudflare service as potentially compromised.

This is a missed opportunity for 1Password to say that not only is your vault safe, but we will also help protect you from any sites that potentially had their data exposed.


I never received any notification from Watchtower to change passwords during the LinkedIn, Dropbox, or Yahoo hacks. Apparently Watchtower was only supposed to notify you about the Heartbleed vulnerability, according to their website.

> 1Password Watchtower is a service that identifies websites that are vulnerable to Heartbleed, and will suggest which sites need to have their passwords changed.

https://watchtower.agilebits.com/


We update it all the time with new items as we see them announced. It won't contain all of them, but whatever we stumble on or see in various places gets added when there's an actionable thing a user can do.

We've added a handful of sites today that have suggested changing passwords after this announcement.

Kyle

AgileBits


As a happy customer of 1Password, it would be great to see you connect with Have I been pwned?[1] for watchtower notifications.

[1] https://haveibeenpwned.com/


I think you should suggest all sites that could have been compromised not just the few that suggest changing passwords.


Dashlane has that. Doesn't work with all sites though. But all major ones should work. See:

https://www.dashlane.com/features/password-changer

https://csdashlane.zendesk.com/hc/en-us/articles/202699281-H...

Supported sites: https://www.dashlane.com/en/password-changer-list


fwiw, I wrote a script in node that takes your 1Password exported URLs and checks them for cfduid and headers...not nearly as nice as if 1Password did this with Watchtower, but in the meantime...

https://github.com/weltan/cloudbleed-1password


LastPass sort of has that.


Here's a script for checking domains of saved logins in Firefox against the list of sites using Cloudflare:

https://gist.github.com/avian2/30db0d579732287d758c21ba8ded9...
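In the same spirit, for Firefox's saved logins, here is a rough Python sketch, assuming the logins.json format in the profile directory (the `hostname`/`origin` field names come from that format; the helper names are hypothetical):

```python
import json

def hostnames_from_logins(logins_json_text):
    """Extract bare domains from a Firefox logins.json document."""
    data = json.loads(logins_json_text)
    hosts = set()
    for login in data.get("logins", []):
        origin = login.get("hostname") or login.get("origin") or ""
        if origin:
            # Strip the scheme and any path, keep the bare domain.
            hosts.add(origin.split("://")[-1].split("/")[0])
    return hosts

def matches_against(hosts, affected_domains):
    """Intersect saved-login domains with a published list of Cloudflare domains."""
    return sorted(hosts & set(affected_domains))
```

The same intersection step works against any exported domain list, not just Firefox's.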


I guess the bottom line is to change all passwords to be sure...


I don't suppose you or anyone else might know: is there any way this could be adapted to work with Firefox history?


In a similar vein, I wrote a very quick Powershell script to check domains from a LastPass CSV export - can share if anyone wants it.


Today I learned that Uber does not have a change-password option once you are logged in. You have to log out and pretend you forgot the password. Bad UX if you don't know.


Holy crap, really? I've been drawing up [profiles for the user account systems of a bunch of websites for the past few years][1], and I think I've only seen that once before (on a Washington State website, no less).

[1]: https://github.com/opws/domainprofiles


Fairly common pattern on non-tech-oriented sites, in my experience.


The downside of mobile-first, or mobile-only for that matter: normal web flows are downplayed. Not that this is excusable for a company of this size.


There is no logged-in access to password reset in the (android) mobile app either.


Worth noting this statement by Cloudflare's CTO:

"I am not changing any of my passwords. I think the probability that somebody saw something is so low it's not something I am concerned about."

http://www.bbc.co.uk/news/technology-39077611


That kind of statement reminds me of this guy:

https://www.wired.com/2010/05/lifelock-identity-theft/


That statement must have given Cloudflare's lawyers an aneurysm.

Seems to me that management's attempt to downplay the problem exposes the company to as much risk as the original technical mistake.


That's terrible... he may as well say "As a representative of the company, I want it to be made clear that I don't treat security seriously".

In trying to downplay it, he's making the matter even worse.


*Article says COO, but Twitter says CTO. Strange.

And he's fairly active on these forums. That seems like such an odd thing to say given how important security is/should be at CF...curious if jgrahamc would further clarify his position here.


Agreed on the importance of security, but if his credentials, from outside their network, can be used in any significant way to impact their services or systems, then they're doing something tragically wrong. For that matter, if his credentials can be used anywhere to impact their services, it's a failure.


As far as I can tell he's basically 2nd-in-command.


That seems like a fairly unwise thing to say. It would be absolutely reasonable for someone to change their passwords after a breach.

By citing a personal viewpoint, they're seeking to downplay the issue while providing little useful advice.


That seems a lot like something a company which was just implicated in a gigantic leak would say: damage control.


It's the modern version of the captain going down with the ship.


Aww man I submitted my list hours ago but I guess it never made it past the New page. https://github.com/pirate/sites-using-cloudflare

Original post: https://news.ycombinator.com/item?id=13720199


Hey! Super useful, thanks.

Quick question: news.ycombinator.com (as an example) is listed in the README as a potentially affected site, but I don't see it in the raw dump that I've downloaded. Am I crazy?


I suspect the raw dump is a list of sites that use the CloudFlare DNS servers, but HN uses a CNAME setup on their own authoritative DNS servers so it wouldn't appear in that list.


I fixed the uploaded list, it now appears in the text file.


That's a wide impact. While any hijacked account is bad, some of these are really bad.

For example, https://coinbase.com is on that list! If they haven't immediately invalidated every single HTTP session after hearing this news this is going to be bad. Ditto for forcing password resets.

A hijacked account that can irrevocably send digital currency to an anonymous bad guy's account would be target number one for using data like this.


Coinbase is certainly one of the most concerning on that list; however, they also support two-factor authentication.


If you captured the right cookies though, you wouldn't need to log in with a password and be subject to OTP. That's why this is so problematic. Caveat: I haven't actually checked the details of Coinbase's session/security tokens.


This is true, but I'd assume all of these sites have flushed their session/cookie data by now.


I also noticed the domain waveapps.com, which is for Wave Accounting.


Cloudflare has advised that Wave data has not been affected/leaked. We've got engineering and security teams investigating, and we'll keep on it until we're ultra-confident in the conclusion. Nonetheless, it's good practice for everyone to rotate all passwords today, for any service. Good security hygiene at any time, and especially now.


How can they know that?

A broken web page could have been queried many, many times over the last weeks, and couldn't one of those responses contain Wave data?


Not 100% sure what their methodology is yet, and we're taking a cautious approach. At minimum, in the data that they've found in the wild, no Wave data was among it.


We're investigating


You missed the "possibly" in the header.

And the disclaimer right at the top:

This list contains all domains that use cloudflare DNS, not just the cloudflare SSL proxy (the affected service that leaked data). It's a broad sweeping list that includes everything. Just because a domain is on the list does not mean the site is compromised.


Affected sites leaked data from random other CF customers. So any site using CF regardless of settings could have leaked private data out there.


Sites using Cloudflare in DNS only mode won't have sent any requests that could be leaked.


Indeed, and it's pretty annoying having my site in that list despite not using CloudFlare's reverse proxy service. If my website handled user logins or sensitive data no doubt I'd have customers contacting me or shying away from my site now. This list needs more vetting.


Do you have a concrete suggestion for the list maintainer to better vet the list?

Can you prove that your site did not use the reverse proxy service at any point while the vulnerability was live?


> Can you prove that your site did not use the reverse proxy service at any point while the vulnerability was live?

This is a scenario where it's impossible to prove innocence. Even if somebody provided you with the logs of their DNS server to show that the website never pointed to CloudFlare, I doubt these logs were stored in a way that their authenticity could be proved. In any case, the onus of proof should almost always be on the accuser, not the accused.

Since you pressure me for a suggestion: my suggestion would have been to only list websites that were using the reverse proxy service (as opposed to just DNS) at the time the data was captured. This can be done by inspecting the HTTP response headers, or maybe even just checking the DNS records against known CloudFlare servers (as opposed to checking the DNS provider).

But given the transience you point out, this method, as well as the method used to gather the list as-is, is fundamentally flawed. I think a better way would be to locate DNS dumps from throughout the vulnerability period and apply the above method to those.
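The "check DNS records against known CloudFlare servers" step can be sketched in Python. The ranges below are a small subset of Cloudflare's published IPv4 list (https://www.cloudflare.com/ips/), which changes over time, so treat them as illustrative:

```python
import ipaddress
import socket

# Subset of Cloudflare's published IPv4 ranges; the authoritative,
# current list lives at https://www.cloudflare.com/ips/.
CLOUDFLARE_NETS = [ipaddress.ip_network(n) for n in (
    "104.16.0.0/13",
    "172.64.0.0/13",
    "173.245.48.0/20",
)]

def ip_is_cloudflare(ip):
    """Pure check: does this IPv4 address fall in a known Cloudflare range?"""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUDFLARE_NETS)

def domain_behind_cloudflare(domain):
    """Resolve the domain's A records and test them against the ranges."""
    ips = {info[4][0] for info in socket.getaddrinfo(domain, 443, socket.AF_INET)}
    return any(ip_is_cloudflare(ip) for ip in ips)
```

Applied to historical DNS dumps (rather than live lookups), the same range check would give a per-date answer, which is what the transience argument calls for.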


Thanks for answering!

Your last idea is a good one, but it's more work for the list editor. I'm not sure of the list editor's motivations, but if he or she is just an impartial volunteer (important assumption), it seems like it's really Cloudflare's responsibility to deliver a comprehensive report of affected sites, so that we don't have to guess?


For what it's worth, as part of work on the effects of DNS on Tor's anonymity [1] we visited Alexa top-1M in April 2016, recording all DNS requests made by Tor Browser for each site. We found that 6.4% of primary domains (the sites on the Alexa list) were behind a Cloudflare IPv4-address. However, for 25.8% of all sites, at least one domain on the site used Cloudflare. That's a big chunk of the Internet.

[1]: https://nymity.ch/tor-dns/


I wrote a simple website[1] to show whether a user has visited websites on the list, automatically and without browser plug-ins. It uses the :visited CSS pseudo-class to highlight sites the user has visited before. It is not 100% accurate, but it can be a fun way to quickly show people that they may have visited sites on the list.

[1]https://cloudbleed.github.io/


I want to share this with my more non-techy friends, but on Chrome, even with uBlock Origin turned off, there are no company names listed. I can hover over every block for the name, but... is this an error, or is it intentional to just have a large block with no labels?


Could you suggest how the names of companies should be shown so we can improve the website? Appreciate your reply :)


Just right-side text that highlights on mouse-over would be great. It may be too much to display all of the names, but having it populate gives some idea of where you need to go.

Perhaps just every "red"/affected site could be populated on the right side :)

Appreciate the layout outside of naming labels however, nice work.


Amazing that this hack still exists


Webmasters and app devs running on CloudFlare: you (at least) have to "force-logout" your users that have a "remember me" cookie set.

At least change the cookie name so the token stops working. For example, in ASP.NET, change the "forms-auth" name in the web.config file.


If you do that, an attacker could just use the same token with a different cookie name and access someone's account. You NEED to invalidate the tokens.


> At least change the cookie name

This is bad advice, you should invalidate the tokens, that's it.
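To make the distinction concrete: the cookie name is just a label the client presents; validity lives server-side. A generic sketch of store-level invalidation (not tied to ASP.NET; the epoch trick shown here is one common way to void every outstanding token at once):

```python
import secrets

class SessionStore:
    """Minimal server-side session table with a global epoch, so every
    outstanding token can be invalidated at once (illustrative only)."""

    def __init__(self):
        self.epoch = 0
        self._sessions = {}  # token -> (epoch at issue time, user id)

    def create(self, user_id):
        token = secrets.token_urlsafe(32)
        self._sessions[token] = (self.epoch, user_id)
        return token

    def lookup(self, token):
        entry = self._sessions.get(token)
        if entry is None or entry[0] < self.epoch:
            return None  # unknown token, or issued before the purge
        return entry[1]

    def invalidate_all(self):
        # Bump the epoch: tokens issued earlier stop validating, no matter
        # which cookie name the client sends them under.
        self.epoch += 1
```

Renaming the cookie, by contrast, leaves every old token valid in the store, exactly the attack the parent comment describes.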


If I have an account on an affected site, but did not interact with the site (via my browser or through some other site with an API call) during the time period when the vuln was live, am I still at risk?


It seems very unlikely that you would be at risk, but there's some remote possibility that your past request data was in memory for some reason


This list doesn't appear to include sites that use a CNAME setup with CloudFlare -- i.e. sites on the Business or Enterprise plans that retain their authoritative DNS and use CNAMEs to point domains to a CloudFlare proxy.

There probably aren't many but with something this serious it could be important. I'm not sure how one would go about finding the sites that use the CNAME option. If it helps, they use a pattern like:

  www.example.com --> www.example.com.cdn.cloudflare.net

Hacker News is one such site, but it's listed in the "notable" section (it's not in the raw dump).
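One rough way to spot such CNAME setups is to look at the canonical name the resolver returns for the host; a Python sketch (socket.gethostbyname_ex needs network access, so only the pure suffix check is exercised offline):

```python
import socket

CF_CNAME_SUFFIX = ".cdn.cloudflare.net"

def looks_like_cf_cname(canonical_name):
    """Pure check: does a canonical name fall under Cloudflare's CDN zone?"""
    return canonical_name.rstrip(".").endswith(CF_CNAME_SUFFIX)

def uses_cloudflare_cname(host):
    """Follow the CNAME chain via the resolver's canonical name for the host."""
    canonical, _aliases, _addrs = socket.gethostbyname_ex(host)
    return looks_like_cf_cname(canonical)
```

This would have to be run per-host, since by definition these sites don't show up in a scan of Cloudflare's own DNS customer base.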


In an email from Cloudflare sent out this morning they said:

> In our review of these third party caches, we discovered data that had been exposed from approximately 150 of Cloudflare's customers across our Free, Pro, Business, and Enterprise plans. We have reached out to these customers directly to provide them with a copy of the data that was exposed, help them understand its impact, and help them mitigate that impact.

Does this jibe at all with the Google or Cloudflare disclosures? They are claiming that, across all caches, they only found and wiped data from ~150 domains; can that be true?


Every single thing Cloudflare has said about impact has sounded very suspiciously optimistic to me. For example, they claim that they would know if an attacker had been intentionally exploiting this bug, but I've seen no details to justify their confidence.

So no, I don't think we can assume that the scope was as limited as they're making out.

Edit: As my coworker points out, as of 2013, they only kept 4 hours of access logs (source: https://blog.cloudflare.com/what-cloudflare-logs/). So basically their existing attack detection infrastructure (built without knowledge of this bug) may not have found anything suspicious, but it appears that that's the extent of what they can say about the last few months. They can claim that they found no evidence within the last week (one hopes that they stopped discarding logs when they found out about the attack), but if they want to convince us that they know this wasn't being exploited as late as two weeks ago, they need to provide specific evidence.


CloudFlare has no way of knowing what was in the uninitialized memory that was leaked. This is just spin.


That list isn't that useful... First of all, there are a LOT of pages hosted by CloudFlare; @taviso acknowledged that in the original bug report. (https://bugs.chromium.org/p/project-zero/issues/detail?id=11...) Furthermore, you can't say which sites were hit by this bug, and simply listing all CloudFlare sites is more or less fearmongering. If you are a verified victim of this bug, CloudFlare will contact you. Lastly, if you want to mitigate the effects of the attack, just do it... If you want to be absolutely sure that your session keys etc. remain uncompromised, simply revoke all active session cookies.


I think you're seriously underestimating Cloudflare's fuckup here.

Listing all CloudFlare proxied sites is exactly the right thing to do. Everyone seems to have been in the scope of the bug, and CF doesn't seem to have any good way of identifying affected customers.


While Cloudflare might contact their customers, it's no guarantee that the customers will actually notify their users, so I think this is a good way to find out which sites I might have to change my passwords and API keys on.


The email Cloudflare is sending out to customers where Cloudflare didn't find any cached info isn't particularly alarming: http://pastebin.com/pUnKJE3J

I wouldn't be surprised if people receiving this took no action.


Well, in the Google Project Zero issue ticket, the engineer said he felt Cloudflare tried to downplay the severity, and it took extra days and a lot of insistence from the Project Zero team to finally get a draft (which, from a legal and company-reputation PoV, makes sense; you need a lot of eyes on a draft before it goes out to the public).

I think not every "leak" is sensitive, but there are definitely instances where CF and Google both found very sensitive information.


> If you are a verified victim of this bug CloudFlare will contact you.

Where do you have that info from?


We are in the process of contacting customers who we are aware had information cached by a search engine.


That's not the biggest risk. The biggest risk is that a malicious actor stumbled upon this bug, realized they could trigger it with specially crafted HTML, then wrote a script to harvest the data, which would be private data from any website with an active session in memory on the shared proxy. In that case, the bigger websites are more likely to be affected, because high traffic means they're more likely to have data stored in memory at any given time.

If you think this is implausible, consider just one persona who could do this:

- Someone turns on Cloudflare's HTTPS service for their website

- They check their pages and see some random data in the middle of a <p> tag

- They reproduce the bug. Then they reproduce it again. Then they script it.


I understand, and we have been (and are) mining the data we have to look for signs of that having happened.


@jgrahamc: If this problem doesn't justify emailing all proxy service customers, what problem would?


We are emailing them all, but we are starting with those that we know had data cached by a search engine.


Honest question: why not start with those that had data moving over a vulnerable server?

Are you confident that Tavis was the first to discover it?


Has anything similar to this happened before?


No


@jgrahamc how can you even answer that question when you didn't detect the issue yourselves?

The email we received was a joke. OK, great, our domains "weren't affected" in the sense that memory dumps weren't being injected into our HTML, and luckily we only proxy static images/HTML through CF, so at worst a visitor's Google Analytics cookie could have been leaked. But on a personal level, any person who has used any CF-proxied website (e.g. Uber) in the past few months is potentially affected.

Whether or not you think it's likely anyone discovered this earlier, the fact remains that private data is still in various public and private caches around the world. It's a monumental cock-up that will require every CF proxy customer to rotate keys, invalidate tokens and force mass password resets to ensure complete peace of mind for millions of consumers who will probably never hear about this issue even though their credit card information, passwords and private messages could be floating around the internet as part of a cached version of a website they've never even visited.

Even the way you're looking for cached data to find affected customers - yeah, OK, for page x.com/y you found data for customer z.com, but what about the other million times that affected x.com/y page was loaded? That could be data from a million different customers that someone else (human or otherwise) saw, whether they realised what it was or not. And trust me, there are more than a few people on the planet who would know _exactly_ what they were seeing.

Forget about shareholder value for a minute, please, because it's an absolutely fatal mistake for your company to downplay an issue like this.


I haven't once thought about cost or shareholder value in the last week. Been working round the clock to clean up and evaluate impact.


Good to hear. I wasn't really trying to accuse, just frustrated at how downplayed this is for ordinary people - your customers' customers. Using language like "affected sites", when really you mean sites that dumped data about some unknown quantity of affected sites, is already a source of confusion even on HN, let alone in the wider world. I appreciate this isn't fun for you and your team right now either, so I do hope you got lucky here and erased the worst of the damage before anyone malicious managed to get involved.


How about when Matty got hacked and 4chan was defaced? While the technical details differ, the situation itself was almost equally bad.


Is it not true that once upon a time a certain b1tch3z who like ac1d stole cloudflare user db?


This doesn't even begin to cover the possible scope of the leak, does it?


Something I have a hard time understanding is how Cloudflare's cached-page generator had access to sensitive information.

Were the two things running in the same process? If they were not, there's no way the buffer overrun could read another process's memory, right? It would have failed with a segfault-type error.

If they were, shouldn't Cloudflare consider running the sensitive stuff in a different process, so that no matter how buggy their caching engine is, it could never inadvertently read sensitive information?


It seems the parser is implemented as an nginx module, thus having access to its memory. CloudFlare terminates SSL in nginx and establishes SSL connections to their upstreams, but inside nginx everything is cleartext.

Private keys are not exposed because they are not stored locally (they use the Lua module to implement this).

Of course, AFAIK.


SSL connections were terminating at the proxy, so the proxy used plain HTTP to the web service backends.


Are you sure about this? Just because they terminate ssl on the proxy doesn't mean the traffic between the proxy and the web service backends was plain HTTP. That's certainly not how we do things.


Unsure about the random downvotes. It's CloudFlare's "Flexible SSL" offering. Granted, sibling non-speculative comments elaborate more thoroughly for the "non-flexible" cases.


My guess, given their widespread use of Go, is that each parser was a goroutine, which uses the same process heap as other goroutines parsing other page requests.


Have you read their incident response? If you had, you would know they weren't using Go for this; it was actually an issue in a parser generated by Ragel (C++), which was then used as an nginx module.


To be clear, it was misuse of Ragel, not the fault of the Ragel module.
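For anyone curious about the bug class: per Cloudflare's postmortem, the generated parser checked for the end of the buffer with pointer equality, and a code path could advance the cursor by more than one. A toy Python analogue of that class of bug (in C the loop reads adjacent heap memory instead of raising an error):

```python
def scan_broken(buf):
    """Parser loop with the equality-check bug: since the cursor can advance
    by 2, testing `p != end` instead of `p < end` lets it hop past the end."""
    p, end = 0, len(buf)
    out = []
    while p != end:      # BUG: should be `p < end`
        c = buf[p]
        if c == "<":
            p += 2       # skip a two-character marker; can jump over `end`
        else:
            out.append(c)
            p += 1
    return out
```

With well-formed input the bug is invisible; an input ending mid-marker pushes `p` past `end`, and the loop keeps reading.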


This is ridiculous and somewhat irresponsible. This is just a list of domains using CloudFlare. The leak was only active under a set of very specific conditions (email obfuscation, server-side excludes, and automatic HTTPS rewrites).

I question pirate's (https://github.com/pirate) motives for even doing this. Karma? Reputation?


Only a few hundred sites were leaking, sure, but the leaked info could have come from any domain that was being proxied by the same edge server. So we would do better to assume that any domain that uses Cloudflare could have had its passwords and other sensitive info exposed via leaky neighbors.


I'm confused by the "not affected" remarks. I thought the issue was any site which passes data through cloudflare could be leaked by requests to a different site, due to their data being in memory. Have I misunderstood?


Inside of TLS, 1Password uses an additional SRP handshake that negotiates a static secret (like a DHE), which 1Password uses to both authenticate the user and set up an additional AES-GCM transport encryption.

So even a full memory dump of what's transported in TLS should, as long as it's properly implemented, only reveal an SRP authentication session and subsequently symmetrically encrypted data.

(And inside that SRP-negotiated encryption should only be more symmetrically encrypted vault items, and RSA-encrypted vault keys. If properly implemented even complete TLS breaks do not break 1Password at all, even the cloud version. Properly implemented being the key words of course.)


I typically think of "encryption inside of encryption" as a boondoggle more likely to somehow break things than make things stronger.

My confidence in that has dropped slightly in the past day.


I'm no expert, but intuitively it would seem that encryption-inside-encryption would be snake oil when they're meant to guard against the same layer/attack vector/threat model: for example, if you nest Serpent inside AES for a single local file encryption operation (ahem, TrueCrypt), that seems very gimmicky.

But if the encryption layers are supposed to protect separate and independent OSI layers or operation steps, then it seems to me it's fully valid. Specifically, in this case: TLS dissolves at the HTTP/HTTPd endpoints, or at any reverse proxies, if used; the additional SRP-negotiated AES dissolves inside the client, at the process doing key handling; and the final "actual" key encryption handles at-rest encryption and makes sure it's zero-knowledge to the storage handler.

They seem to me to be guarding against information leakage in very different parts of the key management process (storage, manipulation, HTTP transport), and that doesn't seem like snake oil to me.


>for example, if you nest Serpent inside AES for a single local file encryption operation (ahem, TrueCrypt), that seems very gimmicky.

I take it to be insurance against a future vulnerability discovered in one of the algorithms. For some scenarios the cost can seem worth it.

On the other hand, nested hashing has always seemed counter productive as it seems plausible that nesting hash functions can decrease the randomness of the image.


PGP email sent over TLS would be an example of nested encryption that people take for granted. So would E2E encrypted chat systems like iMessage, Signal, and WhatsApp, which encrypt each message on the machine, then use TLS to communicate with servers.


Interesting, thanks for the reply.

I wonder which password manager the original Project Zero thread referred to then if not 1Password.


It could be that it was 1password, but the only clear text was URL parameter names or JSON keys, with the values as encrypted strings in base64 or similar.


The update from 1password indicated that there was application layer encryption happening in addition to the TLS encryption, so a breach of the TLS protection did not expose any sensitive data. Presumably other sites are in similar situations. But don't take my word for it, go change all your passwords.


Any hosted password manager should be "host proof". They should not have the decryption keys and it should not be possible for them to disclose your unencrypted passwords no matter how careless they or their intermediaries are. They should be sending an encrypted blob over the wire which is only decrypted in your client app or browser when you enter the passphrase.
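A toy sketch of that "host proof" shape, using only the Python standard library: the key is derived and used on the client, so the server (and any proxy terminating TLS in between) only ever sees an opaque blob. The HMAC-based stream cipher here is illustrative only; a real vault would use an authenticated cipher like AES-GCM.

```python
import hashlib, hmac, os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Slow KDF, run on the client; the server never sees the password or key.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream built from HMAC-SHA256 as a PRF.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_blob(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce + ct  # this opaque blob is all the server stores or transmits

def decrypt_blob(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

A memory leak at the TLS layer then exposes only nonce-plus-ciphertext, which is useless without the master password.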


1Password said that even though they were not affected, they will still move away from Cloudflare due to bad optics.


> Presumably other sites are in similar situations.

Not to my understanding. 1password uses client-side encryption, using keys generated from your master password. This means that any data transmitted over the wire is already encrypted, whether over SSL or not.

Most other sites do not do this, at all, in any way. If you use a website that used Cloudflare's SSL termination, change your passwords and cancel your credit card (if you sent it to that site in the past few months, e.g. Uber/Lyft).

> go change all your passwords.

Yes, correct =].


If you'd seriously cancel your credit cards over this, I'd love to hear how you model that threat relative to all the other risks inherent in using a credit card anywhere (not just online).


Apparently the root cause was:

  /* generated code */ if ( ++p == pe ) goto _test_eof;

"The root cause of the bug was that reaching the end of a buffer was checked using the equality operator and a pointer was able to step past the end of the buffer. This is known as a buffer overrun. Had the check been done using >= instead of == jumping over the buffer end would have been caught."
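The failure mode is easy to simulate. In this hypothetical Python sketch, the request's buffer is a slice at the front of a larger "heap" that also holds another tenant's data; malformed input advances the pointer by more than one, so an `==` end check can be jumped over while `>=` still catches it:

```python
page   = b"<html/>!"               # this request's buffer (8 bytes)
secret = b"other-tenant-password"  # adjacent memory on the shared heap
heap   = page + secret
END    = len(page)                 # logical end of the request's buffer

def parse(check):
    out, p = bytearray(), 0
    while p < len(heap):           # hard stop so the demo terminates
        out.append(heap[p])
        p += 3                     # malformed input advances p past END
        if check(p, END):
            break
    return bytes(out)

buggy = parse(lambda p, end: p == end)  # p goes 3, 6, 9... and never equals 8
fixed = parse(lambda p, end: p >= end)  # stops as soon as p passes the end

# `buggy` now contains bytes of `secret`; `fixed` stays inside `page`.
```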

Detailed timeline:

"2017-02-18 0011 Tweet from Tavis Ormandy asking for Cloudflare contact information

2017-02-18 0032 Cloudflare receives details of bug from Google

2017-02-18 0040 Cross functional team assembles in San Francisco

2017-02-18 0119 Email Obfuscation disabled worldwide

2017-02-18 0122 London team joins

2017-02-18 0424 Automatic HTTPS Rewrites disabled worldwide

2017-02-18 0722 Patch implementing kill switch for cf-html parser deployed worldwide

2017-02-20 2159 SAFE_CHAR fix deployed globally

2017-02-21 1803 Automatic HTTPS Rewrites, Server-Side Excludes and Email Obfuscation re-enabled worldwide"

Seems like a pretty good response by cloudflare to me.


It's a good postmortem (describes WHAT happened), but it doesn't really communicate the impact to Cloudflare customers or their end users (describe WHY people should care).


I've been tinkering with a Python notebook for a few minutes to try to quickly assess how much of my LastPass vault is affected:

https://gist.github.com/dikaiosune/0ca7829884b3b3f790418f0f1...

Improvements welcome.

One interesting thing: the raw dump that's linked from the list's README doesn't seem to include a couple of notable domains from the README itself, like news.ycombinator.com or reddit.com. I may be mangling the dump or incorrectly downloading it in some way.

EDIT: disclaimer, be responsible, audit how the dump is generated, etc etc etc


Authy is on the list. It would be really nice if they confirmed whether they are vulnerable or not, considering they hold all of my 2FA tokens. Otherwise I'll have to re-key the database.


Wouldn't you rather just do that as a precaution? Then you won't be constantly worried that they might have had their data leaked.


I wrote a python script to help check your LastPass database for any potentially affected sites.

https://github.com/RidleyLarsen/cloudbleed_check_lastpass
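For anyone rolling their own, the core of such a check fits in a few lines. This sketch assumes a LastPass CSV export with a `url` column and the list as one domain per line; it also matches parent domains, since the list holds registered domains while vault entries often store subdomain URLs:

```python
import csv
from urllib.parse import urlparse

def affected_logins(lastpass_csv: str, domains_file: str) -> set:
    """Return vault hostnames whose domain appears in the affected-domains list."""
    with open(domains_file) as f:
        cloudflare = {line.strip().lower() for line in f if line.strip()}
    hits = set()
    with open(lastpass_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = (urlparse(row.get("url", "")).hostname or "").lower()
            parts = host.split(".")
            # check sub.example.com, then example.com, then com...
            for i in range(len(parts) - 1):
                if ".".join(parts[i:]) in cloudflare:
                    hits.add(host)
                    break
    return hits
```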


Is there a "standard" in the works for changing a password? Stuff like this is happening rather too frequently for my taste. I need a tool I can use to update all my passwords everywhere automatically and store the new ones in my password manager.


I agree. It's about time password managers and password changing were standardized. Password managers should be supported across the board, not just in browsers.



Dashlane has that feature too.


I ginned up this little tool tonight to help people out instead of grepping.

https://bleed.cloud/index.html

Sorry for the index.html; still figuring out how to get the index file to work on CloudFront.

You can also grab the Python script from the site and run it anonymously on your own computer to dig domains out of your email, which is a good indicator that you have an account with them.


I have hundreds of passwords in my password manager. That's going to take a week, considering I also have to work.



No, it's keepassx on my laptop. I don't trust my passwords to somebody else.


You might find this script I wrote to be helpful:

https://github.com/nandhp/misc-utils/blob/master/keepassx_do...


So why do you have to change them all?


My cleartext passwords could have been dumped into the responses of some other site together with my user names. That's the gist of this incident.


I still don't understand why you have to change every password (aren't they all supposed to be different in a password manager?). Of course, if you are super extra mega careful, then change them all...


pmontra doesn't have to change every password, just every password used by a website using Cloudflare (thus the purpose of the list of affected sites).


Even if your password manager is not compromised, the credentials of so many sites are potentially leaked that you should probably still update a substantial number of passwords.

I hope 1Password's Watchtower service will soon give hints.


Disclaimer: I work for AgileBits, makers of 1Password.

I am combing through site lists trying to find sites that are impacted. However, I am only updating based on sites that have suggested that they be updated. Very few are suggesting password updates right now. But, there are a few that are in Watchtower now.

Lists like the one linked here are useful, but they don't actually say whether a site was indeed impacted. In the end we may just treat this like Heartbleed and suggest they all get updated, but for now we're trying to only suggest it where necessary.

If you're aware of any that have suggested password changes please let me know and I'll get them added immediately :)

Kyle

AgileBits


Even if it is, what does it have to do with all those hundreds of sites?


Same here. I just spent the last 30 minutes changing passwords at my most critical sites. Banking, email, VPS providers, etc.

I figure I’ll slowly work through the rest in the coming weeks as time allows.


Same here... And this made me realize how painful it can be to change password on some website.


This list seems to be missing any sites that are using custom nameservers, which would be common on top sites using the enterprise plans. A better way to detect if the proxy is being used would be to resolve the IP and see if it lies in Cloudflare's subnets.
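A minimal version of that check, using a few of Cloudflare's published IPv4 ranges (the live list at cloudflare.com/ips should really be fetched; this hard-coded subset is just for illustration):

```python
import ipaddress
import socket

# Subset of Cloudflare's published IPv4 ranges, for illustration only.
CLOUDFLARE_V4 = [ipaddress.ip_network(n) for n in (
    "104.16.0.0/12", "172.64.0.0/13", "173.245.48.0/20", "198.41.128.0/17")]

def ip_in_cloudflare(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUDFLARE_V4)

def behind_cloudflare(domain: str) -> bool:
    # Resolve and test every A record, regardless of whose nameservers serve it.
    try:
        infos = socket.getaddrinfo(domain, 443, socket.AF_INET)
    except socket.gaierror:
        return False
    return any(ip_in_cloudflare(info[4][0]) for info in infos)
```

This catches enterprise customers on custom nameservers, since the proxy IPs are what matter, not who serves the DNS.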


And, I've found several of my domains on this list.. Some of which don't host web content etc and only use cloudflare for DNS. The list is currently ~4.3mil entries, which honestly feels like a rather low figure. I have no data to back up my gut feeling though ;)

Anyway, I'm OK with them being on this list, as I believe understanding the scope of the problem is important to figuring out how we prevent these kinda problems in the future.. (For example, answering this question requires understanding who uses CloudFlare: Why are so many sites concentrated on a single infrastructure?)


Thanks for posting and curating this list.


Oh crap. I've entered my banking password into Transferwise quite a few times.

Welp, time to change all my passwords.


> Welp, time to stop using the same password for multiple services.

> Welp, time to start using a password manager.

FTFY


OP isn't saying they used the same password for transferwise as for their bank. Transferwise allows you to log into your internet banking and authorize a transaction through their site. You actually give them your internet banking password, regardless of how you log into their site.

Which is pretty strange in itself, to trust a 3rd party with your internet banking password, but that's how it works.


This is the main reason I've never used Mint.


Folks should review their banks' policies before doing this.

For example Bank of America won't hold customers liable for fraudulent transfers or bill pay transactions through their website, but sharing your online ID and password seems to void that protection.

https://www.bankofamerica.com/onlinebanking/online-banking-s...


Yes, definitely a good suggestion.

I already use Lastpass, which makes regenerating all my passwords a little easier.


Do browsers still leak history info (eg http://zyan.scripts.mit.edu/sniffly/) is it possible to have a page show visitors if they are likely to be affected?


What if I sign in with Facebook or another provider? Should I change my Facebook password or what?


TL;DR? You should be OK...

Long version:

That (most likely) would have used OAuth. So instead of sending your FB password to the site to log you into FB with, you give your FB password (if you're not signed in) to FB, and then Facebook gives the site using "sign in with Facebook" a token it can use with Facebook to get account info / do actions on your FB account.

Now depending on which "sign in with" system you used, the code handed back to the site (via a callback URL handled by the client) is often a single-use code. So once the site using "sign in with" has used the code with FB, it gets another set of tokens it will use with Facebook directly.

After the initial "sign in with" process the Facebook tokens are most likely never handed to clients (because they often need to be mixed with a site secret during requests to the likes of Facebook).

So you _should_ be OK if you used a decent "sign in with" system like Facebook's, as the only thing that would have been handed back to the client and then sent from the client to the site is that single-use code. The communication between the site and Facebook would have used an API endpoint.

Now... if you used another site's (not Facebook's) "sign in with" system and their API is also behind Cloudflare, it could well be that some API keys are in a cache somewhere. If those requests were signed with secrets you should be fine, because without the site's secrets to, say, create an HMAC signature for the request, then while there might be some personally identifiable information in caches somewhere, the signatures should have already expired, meaning they can't be used in a replay attack and the cached data can't be used to create fresh requests.

BUT this all depends on everyone doing things right, which may not be the case. Either way, OAuth tokens are often not revoked when you change your password. I.e. you might change your FB password and then still be able to auto-login on somesmallsite.org because the tokens shared between FB and somesmallsite.whatever haven't changed.
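The single-use code is the crux: even if the callback request leaked, the code is dead after the first exchange, and the exchange itself requires a client secret that never touches the browser. A hypothetical provider-side sketch (the function names and secret are illustrative, not any real API):

```python
import secrets

issued_codes = {}             # code -> user id, consumed on first use
CLIENT_SECRET = "app-secret"  # hypothetical; known only to the site's servers

def issue_code(user_id: str) -> str:
    # Handed back to the site via the callback URL in the user's browser.
    code = secrets.token_urlsafe(16)
    issued_codes[code] = user_id
    return code

def exchange_code(code: str, client_secret: str):
    # Server-to-server call; the code works exactly once.
    user = issued_codes.pop(code, None)
    if user is None or client_secret != CLIENT_SECRET:
        return None
    return {"access_token": secrets.token_urlsafe(32), "user": user}
```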


Thanks for this answer, it's perfect!


Couldn't find a practical description of who is affected anywhere. Is it just the customers who have Cloudflare HTTPS proxy service being affected, or anyone using Cloudflare DNS is affected?


Anyone who passed HTTP or HTTPS traffic via Cloudflare might have had that data leaked into other users' sessions.


Has Cloudflare fixed the issues? Should I update passwords now or wait?


Yes, they've fixed the issue, so it's safe to change your passwords now. But expect some websites to prompt you to change your passwords again once they disclose the breach.


It would be more useful if there was a way to see sites that actually were using the Cloudflare features that caused this bug. A large number of sites use Cloudflare, but few should have been affected by this bug:

> When the parser was used in combination with three Cloudflare features—e-mail obfuscation, server-side excludes, and Automatic HTTPS Rewrites—it caused Cloudflare edge servers to leak pseudo random memory contents into certain HTTP responses. https://arstechnica.com/security/2017/02/serious-cloudflare-...


As has been mentioned elsewhere on HN, those 3 features were capable of triggering the bug. Once triggered, potentially any Cloudflare-enabled site could have been affected.


You only needed one service triggering the involved module in the CF proxy, and all traffic going through that proxy could be affected, regardless of which features each account had enabled. This went on for over three months.

I think even CF struggled to find all affected sites, as shown by the amount of stuff still in Google's cache after 7 days of purging. Unless they keep three months of logs listing every site that passed through each and every proxy, you cannot be 100% certain which traffic was affected.


Unfortunately this seems to include news.ycombinator.com


> This list contains all domains that use cloudflare DNS, not just the cloudflare SSL proxy (the affected service that leaked data). It's a broad sweeping list that includes everything. Just because a domain is on the list does not mean the site is compromised.

In this case HN, IIRC, does not use the proxy.

Checking the certs: Cloudflare reissues using DigiCert, I think, whereas HN is using a Comodo cert.


You can upload your own cert to CloudFlare.

Hacker News does hit the CF proxy and was affected.

  $ host news.ycombinator.com
  news.ycombinator.com is an alias for news.ycombinator.com.cdn.cloudflare.net.
  news.ycombinator.com.cdn.cloudflare.net has address 104.20.44.44
  news.ycombinator.com.cdn.cloudflare.net has address 104.20.43.44


The list of websites once again reminds me of what avenue Q immortalised in song: the internet is for porn


Just received an email from Glidera, a Bitcoin exchange. This is the first service to ask me to reset my password. I wonder why Uber, NameCheap, FitBit, and many others have yet to warn their users? Is Cloudflare downplaying this?

> Hi [Username],

> A bug was recently discovered with Cloudflare, which Glidera and many other websites use for DoS protection and other services. Due to the nature of the bug, we recommend as a precaution that you change your Glidera security credentials:

> Change your password > Change your two-factor authentication

> You should similarly change your security credentials for other websites that use Cloudflare (see the link below for a list of possibly affected sites). If you are using the same password for multiple sites, you should change this immediately so that you have a unique password for each site. And you should enable two-factor authentication for every site that supports it.

> The Cloudflare bug has now been fixed, but it caused sensitive data like passwords to be leaked during a very small percentage of HTTP requests. The peak period of leakage is thought to have occurred between Feb 13 and Feb 18 when about 0.00003% of HTTP requests were affected. Although the rate of leakage was low, the information that might have been leaked could be very sensitive, so it’s important that you take appropriate precautions to protect yourself.

> The actual leaks are thought to have only started about 6 months ago, so two-factor authentication generated before that time are probably safe, but we recommend changing them anyway because the vulnerability potentially existed for years.

> Please note that this bug does NOT mean that Glidera itself has been hacked or breached, but since individual security credentials may have been leaked some individual accounts could be vulnerable and everyone should change their credentials as a safeguard.

> Here are some links for further reading on the Cloudflare bug:

> TechCrunch article: https://techcrunch.com/2017/02/23/major-cloudflare-bug-leake... > List of sites possibly affected by the bug: https://github.com/pirate/sites-using-cloudflare/blob/master...

> If you have any questions or concerns in response to this email, please contact support at: support@glidera.io



I would like to point out that, if most sites used two-factor authentication, this leak would be at most a minor inconvenience. Maybe we should push for that more. Just days ago I talked to Namecheap about its horrible SMS-only 2FA and asked them to implement something actually secure, maybe contact your favorite site if they don't have 2FA yet.


If you setup TOTP (Authenticator) while this bug was out in the wild your shared secret key could have leaked. SMS would actually be safer than TOTP in this scenario.


That's insightful. So you shouldn't only reset your passwords, but your TOTP setup too (if you set it up in this period).

I think it's a flaw of TOTP though. The client secret should be client generated and should never leave the device.


Yes, unfortunately both the client and server need to have that shared private key to generate the same codes.

Transmitting the key over a 'secondary' channel would have protected people here.

It raises the question of whether TOTP is really 2FA if it is set up over a single channel of communication.


What? No, that shouldn't be possible (unless the site operator massively fucked up, or there's some attack that feasibly lets you go from one or more TOTP values to the likely shared secret).

TOTP works by having (as you said) a shared secret key, and both sides calculate an HMAC of the secret along with the timestamp, and then just modulo the resulting HMAC by 10^6 (usually, for TOTPs with six digits).

Your google authenticator (or whatever) app does this HMAC on the shared secret + time, you type in six digits, and the other side does the same HMAC and verifies that they're the same. If they're UX-conscious, they might decide to HMAC the previous and next 30 second periods as well and compare those too, to account for small amounts of time skew.
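For concreteness, the whole algorithm is tiny, which is exactly why possession of the shared secret is everything. A standard-library sketch that matches RFC 6238 (SHA-1, 30-second steps, 6 digits):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, t=None, digits=6, step=30) -> str:
    """RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Anyone who captured the base32 secret from a setup page can run this forever, whereas a leaked six-digit code alone expires within the 30-second window.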


parent said "if you setup TOTP". If you did that, you somehow communicated the shared secret key to your authenticator, presumably from a web page. Which could have leaked.


ah, thank you, that makes more sense.


2FA doesn't protect you against cookie/token stealing. The website owners need to invalidate all of that on their ends.


It does and you don't even need real 2FA for it.

Any non-trivial auth token comes with device or host fingerprinting. That's enough to stop this attack scenario in most cases.


Usually only big tech companies and well-funded startups can afford to use device fingerprinting in addition to auth tokens. This essentially involves keeping track of the last time you logged in, your IP address, and your device characteristics, then notifying you if there is an unusual change in any of those metrics.

For instance, although I have never been to China, I once got a notification from Facebook that someone attempted a password reset on my account from China. This was shortly after the publication of LinkedIn's stolen database of users which affected millions of users including my account.


As someone unfamiliar with this, can you please elaborate? Would the host be fingerprinted on every subsequent usage of the authentication token, and using what methods?


On a basic level, you can include an IP or a country inside your authentication tokens. That's enough to block some unwanted access.

On a more advanced level, there is a two-step process: you authenticate as usual with your password and get a token, then the site authenticates your device.

The device fingerprinting is totally transparent: it saves some characteristics of your computer and checks that you come from the same device next time.

For instance, on Facebook you can see a list of known devices somewhere. When you connect from a new computer it sends you an email: "connected from a new computer, is that you?"
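The basic form of "include an IP inside your authentication tokens" is an HMAC-signed claim set. A hypothetical sketch (the signing key and claim fields are illustrative):

```python
import base64, hashlib, hmac, json

SIGNING_KEY = b"server-side secret"  # hypothetical; never leaves the server

def issue_token(user_id: str, ip: str) -> str:
    payload = base64.urlsafe_b64encode(
        json.dumps({"uid": user_id, "ip": ip}).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str, request_ip: str) -> bool:
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                     # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["ip"] == request_ip    # stolen token fails from elsewhere
```

A token lifted out of a memory leak would then only replay from the original IP, which narrows the attack considerably (though not completely, e.g. shared NATs).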


Not the parent but I posted a plausible explanation to your question here: https://news.ycombinator.com/item?id=13731656


Good luck with that, people have been requesting for TOTP support on Namecheap for the past 3 years.

https://www.namecheap.com/support/knowledgebase/article.aspx...

https://blog.namecheap.com/two-factor-authentication/

Here's a list of alternatives someone asked a month ago:

https://news.ycombinator.com/item?id=13484739


Worth noting that authy is on the list. So if you synced your authy authenticators during that time, it's possible all your totp secrets leaked. One would hope authy encrypts those keys (I believe they do) client side, but yiiiiikes, I'm thinking about getting a yubikey.


Followup - authy's blog post confirmed what I thought: they do encryption of all the user-entered TOTP secrets on the client side and don't store decrypt keys. So anyone who was able to intercept your secret store would still have to decrypt it, which is (hopefully) Quite Difficult.

link: https://www.authy.com/blog/security-notice-authy-response-to...


Can you explain how 2FA would have helped?


Unless the web site was paranoid enough to encrypt your password client-side before sending it to the server, it's possible the password was leaked.

With 2FA, your password and a one-time code were leaked and cached somewhere, but in order to log in as you today, an intruder would need to know a new code. And they wouldn't, unless you happened to set up your time-based one-time password (TOTP, e.g. Google Authenticator) during this timeframe, as almost_usual mentioned. The reason is that the secret key may have been leaked, so now someone can generate the same numbers your app is generating.

xuki mentions that 2FA doesn't protect you against stolen bearer tokens, which is another issue. The usefulness of a stolen token depends on whether it's expired. If in doubt, force a sign-out of all your sessions to invalidate old tokens (and change your passwords).


What is the timeframe where setting up TOTP is vulnerable? I haven't been able to find an indication of how long this bug has been in production.


Cloudflare shares a timeline on their blog post:

  The three features implicated were rolled out as follows.
  The earliest date memory could have leaked is 2016-09-22.

  2016-09-22 Automatic HTTP Rewrites enabled
  2017-01-30 Server-Side Excludes migrated to new parser 
  2017-02-13 Email Obfuscation partially migrated to new parser 
  2017-02-18 Google reports problem to Cloudflare and leak is stopped

  The greatest potential impact occurred for four days starting
  on February 13 because Automatic HTTP Rewrites wasn’t widely
  used and Server-Side Excludes only activate for malicious
  IP addresses.
https://blog.cloudflare.com/incident-report-on-memory-leak-c...


Do I need to change my cloudflare password?


Would the Internet Archive be able to "cache" the leaks?


this is another data point that supports my personal, hare-brained theory that the expectation of privacy on the internet is simply naive, a fool's errand. it never existed, and never will.

this is despite (or maybe because) of my best efforts to secure systems as a major part of my job.


Volusion.com


The title is misleading (for now). It is just a list of all sites using CF, compromised or not.


All sites are potentially compromised. Anything that was in memory, which could be from any site, would be spewed out, regardless of which site was used as the trigger.


A quote from the page:

> This list contains all domains that use cloudflare DNS, not just the cloudflare proxy (the affected service that leaked data).


"List of Sites possibly affected"

Sites using Cloudflare, really. However, Cloudflare say that only sites using three page rules were affected - email obfuscation, Server-side Excludes and Automatic HTTPS Rewrites. [1]

Is this over-estimating the impact, perhaps?

[1] https://blog.cloudflare.com/incident-report-on-memory-leak-c...


No! And this is why Cloudflare's poor write-up continues to confuse people. Sites with those features triggered the bug. Once the bug was triggered, the response could include data from ANY other Cloudflare customer that happened to be in memory at the time. Meaning a request for a page with one of those features could include data from Uber or one of the many other customers that didn't use those features. So the potential impact is every single one of the sites using Cloudflare. Not over-estimated at all.


The email Cloudflare is sending out to their customers has a pretty "no big deal" tone as well: http://pastebin.com/pUnKJE3J

I assume there's a separate email for sites where they happened to find Google cache data, but...


Ah, that makes sense. Thanks for clearing it up for me.


Does traffic from different sites flow through the same server process on CF? E.g., can the following sequence occur?:

1. a request hits a site that doesn't use any of those features, but loads juicy data into memory temporarily; the memory is dealloc'd, but is now "primed"

2. a request hits a site that uses those features, triggers the bug, and leaks the data from step #1.

Said differently, my reading of the CF blog is that only sites using those three page rules trigger the bug, but that is distinct from being affected by it. (The affected site is the one in the uninitialized memory; the site using the rules is in the initialized memory being processed.)


Your sequence is correct. The bug was triggered at the proxy level, in an nginx module.


As I understand the issue, the leaked data might be from any other site using Cloudflare caching.

But only requests to sites using the features you mention, will have leaked data.



