> As you may be aware, Cloudflare incurred a security breach where user data from 3,400 websites was leaked and cached by search engines as a result of a bug. Sites affected included major ones like Uber, Fitbit, and OKCupid.
> Cloudflare has admitted that the breach occurred, but Ormandy and other security researchers believe the company is underplaying the severity of the incident …
> This incident sheds light and underlines the vulnerability of Cloudflare's network. Right now you could be at continued risk for security and network problems. Here at Dyn, we would like to extend a helpful hand in the event that your network infrastructure has been impacted by today's security breach or if the latest news has you rethinking your relationship with Cloudflare.
> Let me know if you would be interested in having a conversation about Dyn's DNS & Internet performance solutions.
> I look forward to hearing back from you.
I would consider an email a bit of an escalation though, as opposed to a blog post.
Dyn makes it seem like the entire underlying tech at Cloudflare is vulnerable.
P.S. I no longer work at Cloudflare, so I don't really care, just my .02
I can, however, see how Dyn themselves might not have liked it all that much. With that in mind, both the content and the medium Dyn chose, as discussed in r1ch's post, make more sense in context.
If Dyn delivers where Cloudflare does not, that's not something that can be dismissed out of hand.
Maybe Dyn's email was rude, maybe it wasn't, but I think far too few people make real decisions based on security, and I'm a little forgiving of pushes in the other direction.
> "As I'm sure you're aware, DDoS attacks on Dyn's network have caused massive outages for millions of sites. This prompted me to reach out on the behalf of Cloudflare to see if we can be helpful.
> Over the last 24 hours, we've been helping other Dyn customers migrate to Cloudflare to mitigate the risk.
> Who on your team would be the correct person to explore this with?"
It's not much better.
I was able to get my account reinstated after clearing the whole thing up with MarkMonitor and Amazon. I then removed all affiliate links and moved my DNS hosting elsewhere.
Never using Dyn again.
My feeling is that the free dynamic DNS option that comes with a Namecheap registration offers everything Dyn did, except for the built-in support on cheap home routers.
I am helping comb through a list of sites to see which of them have suggested password updates.
I think we've had very few sites suggest updating the password though. Have you all seen any sites explicitly state users should update? If so I'd love a list so we can get them in Watchtower.
Alternatively, I'd pay for a subscription if the 1Password browser extension were integrated into the browser of a reputable internet security suite (one with consistently high multi-platform AV-Test/NSS scores), and if Watchtower were as sensitive as described above and integrated with that suite as well.
This is coming from a longtime 1Password Pro user (3-6 yrs) who recently got his family on the 1Password bandwagon.
I think Watchtower alerts should be based on any publicly known breach, not based on what the companies themselves say.
At this point, I consider any account using an affected Cloudflare service as potentially compromised.
This is a missed opportunity for 1Password to say that not only is your vault safe, but we will also help protect you on any sites that potentially had their data exposed.
> 1Password Watchtower is a service that identifies websites that are vulnerable to Heartbleed, and will suggest which sites need to have their passwords changed.
We've added a handful of sites today that have suggested changing passwords after this announcement.
Supported sites: https://www.dashlane.com/en/password-changer-list
"I am not changing any of my passwords. I think the probability that somebody saw something is so low it's not something I am concerned about."
Seems to me that management's attempt to downplay the problem exposes the company to as much risk as the original technical mistake.
In trying to downplay it, he's making the matter even worse.
And he's fairly active on these forums. That seems like such an odd thing to say given how important security is/should be at CF...curious if jgrahamc would further clarify his position here.
By citing a personal viewpoint, they're seeking to downplay the issue while providing little useful advice.
Original post: https://news.ycombinator.com/item?id=13720199
Quick question: news.ycombinator.com (as an example) is listed in the README as a potentially affected site, but I don't see it in the raw dump that I've downloaded. Am I crazy?
For example, https://coinbase.com is on that list! If they haven't immediately invalidated every single HTTP session after hearing this news, this is going to be bad. Ditto for forcing password resets.
A hijacked account that can irrevocably send digital currency to an anonymous bad guy's account would be target number one for using data like this.
A broken web page could have been queried many, many times over the last few weeks; couldn't one of those responses have contained Wave data?
And the disclaimer right at the top:
This list contains all domains that use cloudflare DNS, not just the cloudflare SSL proxy (the affected service that leaked data). It's a broad sweeping list that includes everything. Just because a domain is on the list does not mean the site is compromised.
Can you prove that your site did not use the reverse proxy service at any point while the vulnerability was live?
This is a scenario where it's impossible to prove innocence. Even if somebody provided you with the logs of their DNS server to show that the website never pointed to CloudFlare, I doubt these logs were stored in a way that their authenticity could be proved. In any case, the onus of proof should almost always be on the accuser, not the accused.
Since you pressure me for a suggestion: my suggestion would have been to only list websites that were using the reverse proxy service (as opposed to DNS) at the time the data was captured. This can be done by inspecting the http response headers, or maybe even just checking the DNS records against known CloudFlare servers (as opposed to checking the DNS provider).
But since you point out the transience of this, this method, as well as the method used to gather the list as-is are fundamentally flawed. I think a better way would be to locate DNS dumps throughout the vulnerability period & apply the above method to those.
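A sketch of the header-inspection idea mentioned above (purely a heuristic: headers only reflect a site's configuration today, not during the vulnerability window; the `cf-ray` and `server: cloudflare` fingerprints are real Cloudflare proxy markers, everything else here is illustrative):

```python
from urllib.request import urlopen

def looks_like_cloudflare_proxy(headers: dict) -> bool:
    """Heuristic: responses served through Cloudflare's reverse proxy
    typically carry a CF-RAY header and "Server: cloudflare"."""
    lower = {k.lower(): v.lower() for k, v in headers.items()}
    return "cf-ray" in lower or lower.get("server", "") == "cloudflare"

def check_site(url: str) -> bool:
    # Network call; only tells you about the site's *current* setup.
    with urlopen(url) as resp:
        return looks_like_cloudflare_proxy(dict(resp.headers))
```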
Your last idea is a good one, but it's more work for the list editor. I'm not sure of the list editor's motivations, but if he or she is just an impartial volunteer (an important assumption), it seems like it's really Cloudflare's responsibility to deliver a comprehensive report of affected sites, so that we don't have to guess.
Perhaps just every "red"/affected site could be populated on the right side :)
Apart from the naming labels, though, I appreciate the layout. Nice work.
At least change the cookie name so the token stops working. For example, in ASP.NET, change the forms-auth cookie name in the web.config file.
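For reference, a sketch of the setting the parent refers to (the `name` value here is an example; only the attribute itself is standard ASP.NET):

```xml
<!-- web.config: renaming the forms-auth cookie means cookies issued
     under the old name are no longer read; "MyAppAuth2" is illustrative -->
<system.web>
  <authentication mode="Forms">
    <forms name="MyAppAuth2" loginUrl="~/login" timeout="30" />
  </authentication>
</system.web>
```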
This is bad advice; you should invalidate the tokens, that's it.
There probably aren't many but with something this serious it could be important. I'm not sure how one would go about finding the sites that use the CNAME option. If it helps, they use a pattern like:
www.example.com --> www.example.com.cdn.cloudflare.net
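That pattern can be checked mechanically. A minimal sketch using only the standard library (treat it as a heuristic; resolver behaviour varies, and it only reflects current DNS, not historical records):

```python
import socket

def is_cloudflare_cname(canonical_name: str) -> bool:
    """True if a resolved canonical name sits under cdn.cloudflare.net."""
    return canonical_name.rstrip(".").endswith(".cdn.cloudflare.net")

def check_domain(domain: str) -> bool:
    # gethostbyname_ex returns (canonical_name, alias_list, ip_list);
    # for a CNAME'd host the canonical name is the CNAME target.
    canonical, _aliases, _ips = socket.gethostbyname_ex(domain)
    return is_cloudflare_cname(canonical)
```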
> In our review of these third party caches, we discovered data that had been exposed from approximately 150 of Cloudflare's customers across our Free, Pro, Business, and Enterprise plans. We have reached out to these customers directly to provide them with a copy of the data that was exposed, help them understand its impact, and help them mitigate that impact.
Does this jibe at all with the Google or Cloudflare disclosures? They are claiming that across all caches they only found and wiped data from ~150 domains; can that be true?
So no, I don't think we can assume that the scope was as limited as they're making out.
Edit: As my coworker points out, as of 2013, they only kept 4 hours of access logs (source: https://blog.cloudflare.com/what-cloudflare-logs/). So basically their existing attack detection infrastructure (built without knowledge of this bug) may not have found anything suspicious, but it appears that that's the extent of what they can say about the last few months. They can claim that they found no evidence within the last week (one hopes that they stopped discarding logs when they found out about the attack), but if they want to convince us that they know this wasn't being exploited as late as two weeks ago, they need to provide specific evidence.
Listing all CloudFlare proxied sites is exactly the right thing to do. Everyone seems to have been in the scope of the bug, and CF doesn't seem to have any good way of identifying affected customers.
I wouldn't be surprised if people receiving this took no action.
I think not every "leak" is sensitive, but there are definitely instances where CF and Google both found very sensitive information.
Where do you have that info from?
If you think this is implausible, consider just one person who could do this:
- Someone turns on Cloudflare's HTTPS service for their website
- They check their pages and see some random data in the middle of a <p> tag
- They reproduce the bug. Then they reproduce it again. Then they script it.
Are you confident that Tavis was the first to discover it?
The email we received was a joke. OK, great, our domains "weren't affected" in the sense that memory dumps weren't being injected into our HTML, and luckily we only proxy static images/HTML through CF, so at worst a visitor's Google Analytics cookie could have been leaked. But on a personal level, any person who has used any CF-proxied website (e.g. Uber) in the past few months is potentially affected.
Whether or not you think it's likely anyone discovered this earlier, the fact remains that private data is still in various public and private caches around the world. It's a monumental cock-up that will require every CF proxy customer to rotate keys, invalidate tokens and force mass password resets to ensure complete peace of mind for millions of consumers. Most of those consumers will probably never hear about this issue, even though their credit card information, passwords and private messages could be floating around the internet as part of a cached version of a website they've never even visited.
Even the way you're looking for cached data to find affected customers: yeah, OK, for page x.com/y you found data for customer z.com, but what about the other million times that affected x.com/y page was loaded? That could be data from a million different customers that someone else (human or otherwise) saw, whether they realised what it was or not. And trust me, there are more than a few people on the planet who would know _exactly_ what they were seeing.
Forget about shareholder value for a minute, please, because it's an absolutely fatal mistake for your company to downplay an issue like this.
Were the two things running in the same process? If they were not, there's no way the buffer overrun could read another process's memory, right? It would have failed with a segfault-type error.
If so, shouldn't Cloudflare consider running the sensitive stuff in a different process, so that no matter how buggy their caching engine is, it could never inadvertently read sensitive information?
Private keys are not exposed because they are not stored locally (they use the Lua module to implement this).
Of course, AFAIK.
I question pirate's (https://github.com/pirate) motives for even doing this. Karma? Reputation?
So even a full memory dump of what's transported in TLS should, as long as it's properly implemented, only reveal an SRP authentication session and subsequently symmetrically encrypted data.
(And inside that SRP-negotiated encryption should only be more symmetrically encrypted vault items, and RSA-encrypted vault keys. If properly implemented even complete TLS breaks do not break 1Password at all, even the cloud version. Properly implemented being the key words of course.)
My confidence in that has dropped slightly in the past day.
But if the encryption layers are supposed to protect separate and independent OSI layers or operation steps, then it seems to me it's fully valid - specifically, in this case:
- TLS dissolves inside HTTP-HTTPd endpoints or any reverse proxies, if used
- additional SRP-negotiated AES dissolves inside client to the process doing key handling
- final "actual" key encryption, handling at-rest encryption, and to make sure it's zero-knowledge to the storage handler
They seem to me to be guarding against information leakage in very different parts of the key management process (storage, manipulation, HTTP transport), and that doesn't seem to be snake oil to me.
I take it to be insurance against a future vulnerability discovered in one of the algorithms. For some scenarios it seems like the cost can be worth it.
On the other hand, nested hashing has always seemed counterproductive, as it seems plausible that nesting hash functions can decrease the randomness of the image.
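That intuition can be demonstrated at toy scale: composing a function from a finite set to itself can only shrink its image, never grow it. Here SHA-256 truncated to one byte stands in for a "random" function on a small domain (the truncation is purely illustrative):

```python
import hashlib

def toy_hash(x: int) -> int:
    # SHA-256 truncated to one byte: a stand-in "random" function
    # on a 256-element domain, small enough to enumerate exhaustively.
    return hashlib.sha256(bytes([x])).digest()[0]

domain = range(256)
image1 = {toy_hash(x) for x in domain}   # image of h
image2 = {toy_hash(y) for y in image1}   # image of h(h(x))

# image(h∘h) = h(image(h)) ⊆ image(h): nesting never gains randomness,
# and for a random-looking function it typically loses some.
print(len(image1), len(image2))
```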
I wonder which password manager the original Project Zero thread referred to then if not 1Password.
Not to my understanding. 1Password uses client-side encryption, with keys generated from your master password. This means that any data transmitted over the wire is already encrypted, whether over SSL or not.
Most other sites do not do this, at all, in any way. If you use a website that used CloudFlare's SSL termination, change your passwords and cancel your credit card (if you sent it to that site in the past few months, e.g. Uber/Lyft).
> go change all your passwords.
Yes, correct =].
/* generated code */
if ( ++p == pe )
"The root cause of the bug was that reaching the end of a buffer was checked using the equality operator and a pointer was able to step past the end of the buffer. This is known as a buffer overrun. Had the check been done using >= instead of == jumping over the buffer end would have been caught."
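The failure mode is easy to reproduce with indices in place of pointers: if the cursor can ever advance by more than one step (as the generated parser's pointer could), an equality test can be jumped over, while `>=` always catches the overrun. A minimal sketch, not Cloudflare's actual code:

```python
def parse(buf: bytes, strict: bool) -> int:
    """Scan a buffer with a cursor that sometimes advances by 2.

    strict=False mimics the buggy `p == pe` end check; strict=True
    mimics the fixed `p >= pe` check. Returns the final cursor value.
    """
    p, pe = 0, len(buf)
    while True:
        # A parser state that consumes two bytes at once can push the
        # cursor from pe-1 straight past pe.
        p += 2 if (p < len(buf) and buf[p] == ord("&")) else 1
        if strict:
            if p >= pe:         # fixed check: always stops at the end
                return p
        else:
            if p == pe:         # buggy check: missed when p jumps past pe
                return p
        if p > pe + 16:         # safety net so this demo halts;
            return p            # real code would read out-of-bounds memory here
```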
"2017-02-18 0011 Tweet from Tavis Ormandy asking for Cloudflare contact information
2017-02-18 0032 Cloudflare receives details of bug from Google
2017-02-18 0040 Cross functional team assembles in San Francisco
2017-02-18 0119 Email Obfuscation disabled worldwide
2017-02-18 0122 London team joins
2017-02-18 0424 Automatic HTTPS Rewrites disabled worldwide
2017-02-18 0722 Patch implementing kill switch for cf-html parser deployed worldwide
2017-02-20 2159 SAFE_CHAR fix deployed globally
2017-02-21 1803 Automatic HTTPS Rewrites, Server-Side Excludes and Email Obfuscation re-enabled worldwide"
Seems like a pretty good response by cloudflare to me.
One interesting thing: the raw dump that's linked from the list's README doesn't seem to include a couple of notable domains from the README itself, like news.ycombinator.com or reddit.com. I may be mangling the dump or incorrectly downloading it in some way.
EDIT: disclaimer, be responsible, audit how the dump is generated, etc etc etc
Sorry about the index.html; I'm still trying to figure out how to get the index file to work on CloudFront.
You can also run the website's Python script locally (and thus anonymously) on your own computer to dig sites out of your email, which is a good indicator that you have an account with them.
I hope 1Password's Watchtower service will soon give hints.
I am combing through site lists trying to find sites that are impacted. However, I am only updating based on sites that have suggested that they be updated. Very few are suggesting password updates right now. But, there are a few that are in Watchtower now.
Lists like the one this links to are useful, but they don't actually say whether a site was indeed impacted. In the end we may just treat this like Heartbleed and suggest they all get updated, but for now we're trying to only suggest it where necessary.
If you're aware of any that have suggested password changes please let me know and I'll get them added immediately :)
I figure I’ll slowly work through the rest in the coming weeks as time allows.
Anyway, I'm OK with them being on this list, as I believe understanding the scope of the problem is important to figuring out how we prevent these kinds of problems in the future. (For example, answering this question requires understanding who uses CloudFlare: why are so many sites concentrated on a single infrastructure?)
Welp, time to change all my passwords.
> Welp, time to start using a password manager.
Which is pretty strange in itself, to trust a 3rd party with your internet banking password, but that's how it works.
For example Bank of America won't hold customers liable for fraudulent transfers or bill pay transactions through their website, but sharing your online ID and password seems to void that protection.
I already use Lastpass, which makes regenerating all my passwords a little easier.
That (most likely) would have used OAuth. So instead of sending your FB password to the site to log you into FB, you give your FB password (if you're not already signed in) to FB, and then Facebook gives the site using "sign in with Facebook" a token it can use with Facebook to get account info / perform actions on your FB account.
Now, depending on which "sign in with" system you used, the code handed back to the site (via a callback URL handled by the client) is often a single-use code. Once the site using "sign in with" has exchanged the code with FB, it gets another set of tokens it will use with Facebook directly.
After the initial "sign in with" process, the Facebook tokens are most likely never handed to clients (because they often need to be mixed with a site secret during requests to the likes of Facebook).
So you _should_ be OK if you used a decent "sign in with" system like Facebook's, as the only thing that would have been handed back to the client, and then sent from the client to the site, is that single-use code. The communication between the site and Facebook would have used an API endpoint.
Now... if you used another site's (not Facebook's) "sign in with" system and its API is also behind Cloudflare, it could well be that some API keys are sitting in a cache somewhere. If those requests were signed with secrets you should be fine: without the site's secrets to, say, create an HMAC signature for the request, then while there might be some personally identifiable information in caches somewhere, the signatures should already have expired, meaning they can't be used in a replay attack and the cached data can't be used to create fresh requests.
BUT this all depends on everyone doing things right, which may not be the case. Either way, OAuth tokens are often not revoked when you change your password. I.e. you might change your FB password and then still be able to auto-login on somesmallsite.org because the tokens shared between FB and somesmallsite.whatever haven't changed.
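The "single-use code" property described above can be modelled as a toy authorization server (purely illustrative; not any real OAuth library's API):

```python
import secrets
from typing import Optional

class ToyAuthServer:
    """Minimal model of the OAuth authorization-code exchange."""

    def __init__(self) -> None:
        self._pending: dict = {}  # code -> user, consumed on first redemption

    def issue_code(self, user: str) -> str:
        # The code passes through the user's browser, so a cached copy
        # could leak -- but it is only redeemable once.
        code = secrets.token_urlsafe(16)
        self._pending[code] = user
        return code

    def redeem_code(self, code: str) -> Optional[str]:
        # The relying site exchanges the code server-to-server;
        # pop() makes any replay of a leaked code fail.
        user = self._pending.pop(code, None)
        return secrets.token_urlsafe(32) if user else None
```

This is why a leaked callback URL is far less dangerous than a leaked long-lived token: by the time anyone sees the cached copy, the code has normally already been spent.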
> When the parser was used in combination with three Cloudflare features—e-mail obfuscation, server-side excludes, and Automatic HTTPS Rewrites—it caused Cloudflare edge servers to leak pseudo random memory contents into certain HTTP responses.
I think even CF struggled to find all affected sites - which is proven by the amount of stuff still in google cache, after 7 days of purging. Unless they keep three months of logs listing all sites that used each and every proxy, you cannot be 100% certain of which traffic was affected.
In this case, HN, IIRC, does not use the proxy.
Checking the certs: CloudFlare reissues using DigiCert, I think, whereas HN is using a Comodo cert.
Hacker News does hit the CF proxy and was affected.
$ host news.ycombinator.com
news.ycombinator.com is an alias for news.ycombinator.com.cdn.cloudflare.net.
news.ycombinator.com.cdn.cloudflare.net has address 184.108.40.206
news.ycombinator.com.cdn.cloudflare.net has address 220.127.116.11
> Hi [Username],
> A bug was recently discovered with Cloudflare, which Glidera and many other websites use for DoS protection and other services. Due to the nature of the bug, we recommend as a precaution that you change your Glidera security credentials:
> Change your password
> Change your two-factor authentication
> You should similarly change your security credentials for other websites that use Cloudflare (see the link below for a list of possibly affected sites). If you are using the same password for multiple sites, you should change this immediately so that you have a unique password for each site. And you should enable two-factor authentication for every site that supports it.
> The Cloudflare bug has now been fixed, but it caused sensitive data like passwords to be leaked during a very small percentage of HTTP requests. The peak period of leakage is thought to have occurred between Feb 13 and Feb 18 when about 0.00003% of HTTP requests were affected. Although the rate of leakage was low, the information that might have been leaked could be very sensitive, so it’s important that you take appropriate precautions to protect yourself.
> The actual leaks are thought to have only started about 6 months ago, so two-factor authentication secrets generated before that time are probably safe, but we recommend changing them anyway because the vulnerability potentially existed for years.
> Please note that this bug does NOT mean that Glidera itself has been hacked or breached, but since individual security credentials may have been leaked some individual accounts could be vulnerable and everyone should change their credentials as a safeguard.
> Here are some links for further reading on the Cloudflare bug:
> TechCrunch article: https://techcrunch.com/2017/02/23/major-cloudflare-bug-leake...
> List of sites possibly affected by the bug: https://github.com/pirate/sites-using-cloudflare/blob/master...
> If you have any questions or concerns in response to this email, please contact support at: email@example.com
I think it's a flaw of TOTP though. The client secret should be client generated and should never leave the device.
Transmitting the key over a 'secondary' channel would have protected people here.
It raises the question of whether TOTP is really 2FA if it is set up over a single channel of communication.
TOTP works by having (as you said) a shared secret key, and both sides calculate an HMAC of the secret along with the timestamp, and then just modulo the resulting HMAC by 10^6 (usually, for TOTPs with six digits).
Your google authenticator (or whatever) app does this HMAC on the shared secret + time, you type in six digits, and the other side does the same HMAC and verifies that they're the same. If they're UX-conscious, they might decide to HMAC the previous and next 30 second periods as well and compare those too, to account for small amounts of time skew.
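The mechanics above fit in a few lines. A minimal sketch of RFC 6238's SHA-1 variant, with a verifier that tolerates one 30-second step of skew as described:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time counter, dynamically truncated."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                   # RFC 4226 truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, unix_time: int, skew: int = 1) -> bool:
    # Accept the previous/current/next windows to absorb small clock drift.
    return any(
        hmac.compare_digest(totp(secret, unix_time + i * 30), submitted)
        for i in range(-skew, skew + 1)
    )
```

The RFC 6238 test secret `b"12345678901234567890"` at time 59 yields the well-known vector 94287082, i.e. "287082" at six digits.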
Any non-trivial auth token comes with device or host fingerprinting. That's enough to stop this attack scenario in most cases.
For instance, although I have never been to China, I once got a notification from Facebook that someone attempted a password reset on my account from China. This was shortly after the publication of LinkedIn's stolen database of users which affected millions of users including my account.
On a more advanced level, there is a two-step process: you authenticate as usual with your password and get a token, then the site authenticates your device.
The device fingerprinting is totally transparent, it saves and checks some characteristics from your computer, and ensure you come from the same device next time.
For instance, on Facebook you can see a list of known devices somewhere. When you connect from a new computer, it sends you an email: "connected from a new computer, is that you?"
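A toy version of that transparent check (the fingerprint inputs here are illustrative; real implementations combine many more signals and weight them):

```python
import hashlib

def fingerprint(user_agent: str, accept_language: str, screen: str) -> str:
    """Hash a few request/browser characteristics into a device ID."""
    raw = "|".join([user_agent, accept_language, screen])
    return hashlib.sha256(raw.encode()).hexdigest()

class DeviceChecker:
    def __init__(self) -> None:
        self.known: dict = {}  # username -> set of known fingerprints

    def is_known_device(self, user: str, fp: str) -> bool:
        return fp in self.known.get(user, set())

    def remember(self, user: str, fp: str) -> None:
        # Called after the user confirms "yes, that was me".
        self.known.setdefault(user, set()).add(fp)
```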
Here's a list of alternatives someone asked a month ago:
With 2FA, your password and a one-time code were leaked and cached somewhere, but in order to log in as you today, an intruder would need to know a new code. And they wouldn't, unless you happened to set up your time-based one-time password (TOTP, e.g. Google Authenticator) during this timeframe, as almost_usual mentioned. The reason is that the secret key itself may have been leaked, so now someone could generate the same numbers your app is generating.
xuki mentions that 2FA doesn't protect you against stolen bearer tokens, which is another issue. The usefulness of a stolen token depends on whether it has expired or not. If you haven't already, force a signout of all your sessions to invalidate old tokens (and change your passwords).
The three features implicated were rolled out as follows.
The earliest date memory could have leaked is 2016-09-22.
2016-09-22 Automatic HTTPS Rewrites enabled
2017-01-30 Server-Side Excludes migrated to new parser
2017-02-13 Email Obfuscation partially migrated to new parser
2017-02-18 Google reports problem to Cloudflare and leak is stopped
The greatest potential impact occurred for four days starting on February 13, because Automatic HTTPS Rewrites wasn't widely used and Server-Side Excludes only activate for malicious IP addresses.
This is despite (or maybe because of) my best efforts to secure systems as a major part of my job.
> This list contains all domains that use cloudflare DNS, not just the cloudflare proxy (the affected service that leaked data).
Sites using Cloudflare, really. However, Cloudflare says that only sites using three features were affected: Email Obfuscation, Server-Side Excludes, and Automatic HTTPS Rewrites.
Is this over-estimating the impact, perhaps?
I assume there's a separate email for sites where they happened to find Google cache data, but...
1. a request hits a site that doesn't use any of those features, but loads juicy data into memory temporarily; the memory is dealloc'd, but is now "primed"
2. a request hits a site that uses those features, triggers the bug, and leaks the data from step #1.
Said differently, my reading of the CF blog is that only sites using those three features trigger the bug, but that is distinct from being affected by it. (The affected site is the one whose data sits in the uninitialized memory; the site using the features is the one whose page is being processed.)
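The two-step scenario can be simulated with a buffer pool that, like real allocators, doesn't zero memory on free (a toy model, not the actual Ragel parser):

```python
class Pool:
    """One reusable buffer that is not zeroed between requests."""

    def __init__(self, size: int = 64) -> None:
        self.buf = bytearray(size)

    def handle_request(self, body: bytes, overrun: int = 0) -> bytes:
        # Copy the request into the buffer (stale bytes past len(body)
        # survive from earlier requests) ...
        self.buf[: len(body)] = body
        # ... and "respond" with len(body) bytes, plus `overrun` stale
        # bytes past the end when the length check is broken.
        return bytes(self.buf[: len(body) + overrun])

pool = Pool()
pool.handle_request(b"card=4111-1111;site=a.com")  # step 1: primes the buffer
leak = pool.handle_request(b"hi", overrun=30)      # step 2: buggy site leaks it
```

The second request belongs to the "triggering" site, but the leaked bytes belong to the first: exactly the affected-vs-triggering distinction above.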
But only requests to sites using the features you mention will have leaked data.