Ask HN: What's the actual fallout been from the Cloudflare bug?
248 points by anon456 on March 5, 2017 | 75 comments
So I understand what happened with the Cloudflare bug: HTTPS request content (POST bodies, cookies, etc.) was leaked into HTML pages served for the same or other Cloudflare-fronted sites, and some of it was cached by search engines or grabbed by malicious foreign powers. Whenever something like this happens, the HN community whips itself into a frenzy, with people coming out of the woodwork who appear to be experts saying "this is the end" and "this is so bad, we're f*cked".

Meanwhile, none of my friends in the "real world" (outside the HN bubble) seem to be affected by this at all. I have a client that's a Cloudflare customer and they got an email saying they simply weren't affected. And I haven't seen any huge leaks or items in the press about some terrible hack or theft that has brought a person or a company "down".

Should we always take news like this with a grain of salt? How can we tell whether an incident like this is a fundamental undermining of the entire internet infrastructure, an attack that will cripple a few major companies, or just an issue that revealed some data but was mostly overblown? Would love to hear some opinions!




The trouble with this sort of bug is that we'll likely never know.

Some people's accounts will be compromised, and nobody will know if it was due to phishing, insecure passwords, an information leak such as the Cloudflare bug, or an undisclosed or undiscovered breach somewhere.

The more responsible Cloudflare customers have invalidated existing sessions; that's much less hassle than forcing a password reset, and since session tokens are transmitted in every request, a leaked token is much more likely than a leaked password.
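
As a rough sketch of what invalidating sessions server-side can look like in practice (hypothetical Redis-backed session store; the names and key layout are made up, not anyone's actual code):

    import redis

    r = redis.Redis()

    def invalidate_all_sessions():
        # Delete every server-side session so leaked session cookies stop
        # working. Users get logged out and re-authenticate; nobody is
        # forced to change a password.
        cursor = 0
        while True:
            cursor, keys = r.scan(cursor=cursor, match="session:*", count=1000)
            if keys:
                r.delete(*keys)
            if cursor == 0:
                break

An equivalent trick is to record a global "sessions issued before <timestamp> are invalid" cutoff and check it on each request, which avoids scanning the store at all.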


> nobody will know if it was due to phishing, insecure passwords, an information leak such as the Cloudflare bug, or an undisclosed or undiscovered breach somewhere

"Not measurable over the background noise" is a pretty workable definition of "no fallout".


The problem is that you can't magically measure it separately from the background noise. It becomes part of the background noise.

Lacking tools to measure an effect doesn't mean it has no effect.


An information leak is an information leak: we still fail to realise that this is something that happens daily. There's no drama in it.

Criminals take advantage of opportunities like this every day, yet still no one cares too much about it (HN bubble & friends excluded).

Things like this may or may not have a strong impact in the press/popularity circus, but in this particular case it seems they monitored the situation promptly (thanks to their competent staff).

What most surprises me is that their highly competent staff is thoughtlessly violating one of the core principles of software security: SECURITY BY ISOLATION.

No one (no matter how able) can write absolutely bug-free code: even with formally verified software you can still attack the assumptions.

Security by correctness is a laudable effort, but processing multiple customers' data in a single process is not sane. I'm aware they do this for performance reasons, but a well-implemented isolation layer would have contained this (even with a bug like that).

Their architecture is vulnerable.


> What most surprises me is that their highly competent staff is thoughtlessly violating one of the core principles of software security: SECURITY BY ISOLATION.

I don't think this is really true, but I'm open to hearing your thoughts on this. There was a bug in their HTML parser which caused unrelated memory from the process to be dumped into responses. Their SSL termination servers were isolated elsewhere, which is why SSL keys weren't dumped into public caches.

Where would you like them to draw the isolation boundary? Per function? Per rule? Per service? From what I understand, these processes were a part of a single service, but not every request was using each type of rule.

Even if they'd isolated the customers using their HTML parser onto separate servers, other customers on those servers would still have been affected, even if their own HTML was perfectly valid according to the parser.


I think the implication is that the isolation should be per customer, each being allocated their own parsing process, isolated from the other customers.

That's roughly what we do, though we run a hosted version of an open source webapp, not a CDN. It's more expensive resource-wise (particularly RAM), but it has meant that we were immune to 90%+ of the security bugs discovered in the platform.


Sure, that's a valid question to ask. But imagine you have 1,000,000 customers. Now you have to calculate and manage scaling groups for 1,000,000 customers * number of services. The resourcing costs alone would be outlandish, not to mention trying to independently scale each customer. Perhaps container systems would make this easier, but do they have better memory isolation? Is it possible for a container process to overrun into another container's memory without an exploit in the container system?


A container is just a process with some extra isolation (namespaces); containers certainly can't overrun into each other's memory without an exploit.

Why would the costs be outlandish? We offer that and we're fairly cheap. Since the cost is mostly fixed per customer, it should scale linearly.

As for scaling, they already have to do that, by pointing different requests at different servers depending on their load, etc.


Assuming for a minute that containers aren't in play, then the isolation model becomes that of a server/vm with the associated overhead of each. To make this easier we'll assume there's only a single service, even though we know this to be untrue.

If there are 1M customers that's a minimum of 1M servers. Some customers are obviously larger and would need more. There's also HA. Let's conservatively call it 2.5M servers.

At an absolute bare minimum we'd need to allocate 2.5M GB of RAM and 2.5M vCPUs. That's a huge amount of resources.

If you could reliably fit 10,000 small customers on a single server with 32 GB of RAM and 8 CPUs, you can already start to see how many resources can be saved.
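
Back-of-envelope, using the figures above (purely illustrative):

    customers = 1_000_000

    # Fully isolated: one small VM per customer, ~2.5x for sizing/HA headroom.
    isolated_vms = int(customers * 2.5)      # 2,500,000 VMs
    isolated_ram_gb = isolated_vms * 1       # ~2,500,000 GB at 1 GB each

    # Shared: ~10,000 small customers per 32 GB / 8 CPU server.
    shared_servers = customers // 10_000     # 100 servers
    shared_ram_gb = shared_servers * 32      # 3,200 GB

    print(isolated_ram_gb / shared_ram_gb)   # roughly 780x more RAM for isolation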

Without customer isolation you've got the entire cluster to handle load spikes and HA. With isolation you need scheduling that monitors each of the 1M clusters and scales appropriately by anticipating demand.

Scaling a service is way easier than scaling customers within a service or many services.


> Assuming for a minute that containers aren't in play, then the isolation model becomes that of a server/vm with the associated overhead of each.

Why? There's nothing magical about a container, it's literally just a cgroup of Linux processes. You don't have to use them to get the memory isolation we're talking about - uncontained processes get it too.

That's what we do: one process per client, uncontained, just running on a different system user.

But in any case, sure, use containers, I'm certainly not opposed to them.
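
A minimal sketch of that model (one forked worker per customer, each dropping to its own system user; serve_customer and the UID/GID values are hypothetical placeholders):

    import os
    import time

    def serve_customer(customer_id: str):
        # Placeholder for the real per-customer request-handling loop.
        time.sleep(1)

    def spawn_customer_worker(customer_id: str, uid: int, gid: int) -> int:
        # Fork a worker with its own private address space (ordinary process
        # isolation), then drop to the customer's own system user for
        # filesystem/signal isolation on top of that.
        pid = os.fork()
        if pid == 0:                      # child
            os.setgid(gid)                # drop group before user
            os.setuid(uid)
            serve_customer(customer_id)
            os._exit(0)
        return pid                        # parent keeps the worker's pid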


That really isn't practical given the number of datacentres CF are in * the number of free customers they have.

Perhaps for some tier of paid customer.


There are many ways to ensure a system is fault-tolerant, scalable and still reasonably safe.

They certainly have the resources to solve this if they wanted to.

All I can say is that I am not willing to pay for something so fragile, but that is only my own opinion.

In the real world things may go differently: they exposed critical data back in 2012 too and they're still here ...


Linux namespaces/containers create a memory page table completely separate from the host's, so barring vulnerabilities in the container implementation that allow mapping host physical memory into guest virtual memory, isolation is strictly enforced by the MMU in the hardware. Without an exploit, the worst-case scenario is leaking shared library read-only sections across containers (since the physical memory might be shared for a smaller container footprint, although I don't know if LXC supports that yet).


> Linux namespaces/containers create a memory page table completely separate from the host's

Each _process_ has its own memory page table. Containers are built out of processes, so they inherit this attribute.

Namespaces have nothing to do with it.


Sorry, I should have elaborated: with namespaces, each container instance gets its own process table with separate non-shareable pages (without KSM or other dedup feature) and then each container process gets its own page tables, like they normally do. The point is that there's an extra level of isolation beyond just processes, although there is still the kernel attack surface.
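
If you want to see that extra level concretely, here's a sketch using util-linux's unshare (needs root; it only demonstrates that the child gets its own PID namespace and process table, nothing Cloudflare-specific):

    import subprocess

    # Inside a fresh PID namespace with /proc remounted, `ps -e` sees only
    # itself and the forked parent: the host's processes are not visible.
    subprocess.run(
        ["unshare", "--fork", "--pid", "--mount-proc", "ps", "-e"],
        check=True,
    )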


> each being allocated their own parsing process

That just punts the vulnerable code elsewhere. A kernel bug could leak memory across processes. And the kernel is also written in C, so you aren't getting protection from a "better" language either.


That's assuming that the likelihood of such a bug in the kernel code is the same as a bug in an HTML parser. And also that this bug would go unnoticed for months. The fact that both are written in C doesn't make them equal.


absolutely true


Thing is, with the single-process model, a kernel bug could also leak memory between customers. And in fact, it's much more likely, since to the kernel they're all in the same security context. So it's not punting the code, it's reducing the attack surface.


> And in fact, it's much more likely, since to the kernel they're all in the same security context.

I disagree. I think it's much less likely, because the kernel doesn't usually get involved in the process' memory once it is allocated.

Sure, it maps virtual memory to physical memory as needed, but bugs there are likely to cause severe corruption resulting in an immediate crash. The kernel doesn't operate at a level where a bug would plausibly leave the single process working while leaking data across a process-internal customer boundary that the kernel cannot even see. That would require a level of surgical precision I don't think is likely in a bug.

To be clear, I'm not saying that you shouldn't use per-customer processes. The kernel has more eyes and is less likely to be vulnerable in this way. Just that from an analytical perspective, you are really just moving the problem elsewhere, rather than solving it.


I read their "incident report" post too. Isolation is claimed, but if I understand correctly, HTTP handling is shared between customers. Am I wrong?

Suppose you're a "bad actor": knowing this is a shared service, wouldn't you look for 0-days in it? A carefully crafted exploit has the potential to leak specific content from unaware customers again.

The attack surface is nginx (http://nginx.org/en/security_advisories.html) plus each component of each loaded module ...

It would be saner to apply isolation to each element of the cartesian product between customers and services.

The performance (and cost) impact can be mitigated by scheduling resources over a pool of disposable virtual machines (obviously on Xen and with IOMMU protection), but I bet they can develop even better solutions.


Even if there were a set of VMs per customer (and all the scaling per customer overhead that goes along with that), a carefully crafted exploit would still reveal details for that customer. Then it'd be a matter of enumerating all of the customers you were interested in exploiting, which would make it easier to get data for a specific target.

The operational overhead of VM/Container isolation for the cartesian product of customer + service sounds like it'd be extremely prohibitive. It's certainly a tradeoff, but to claim it's saner is missing all of the other costs associated with such a system.


Yeah it makes a targeted attack easier. But it prevents attacks across different customers. Tradeoffs ...

Maintaining a running pool of VMs per service, sized to serve the request load grouped by customer, and assigning a VM to a specific customer only when needed, is different from permanently running a pool of (n customers) x (m services) VMs.

This is why an efficient scheduler and disposable VMs are needed. Still, depending on the load and the variety of the traffic it may not be feasible; you are absolutely right!

Another approach to ensuring isolation is a MAC (mandatory access control) framework. As I wrote, "I bet they can develop even better solutions" ;)


The point is that if they had isolated their customers only those customers using that particular feature would have been affected. Now potentially all customers have been affected.


It's not much to go on, but it seems they are looking into better isolation going forward: https://disqus.com/home/discussion/cloudflare/incident_repor...


We'll be sure to write up details of things we have changed. It's ongoing.


As I understand it -- somebody please correct me if I have this wrong -- the thing about Cloudbleed is that there isn't necessarily any relationship between the site whose page is cached and the site whose credentials appear in that cached page. So the only way to know that a particular site didn't have credentials leaked is to search all the caches of all the search engines on the Internet.

So, as perlgeek says, we'll probably never know specifically what the impact was.


Plus all the non-search engine caches, plus the computers of the zero or more people that made requests knowing these sorts of leaks were happening but didn't say anything.

The stuff that got cached was just the persistent part of the leak; there's no way to know how many people noticed the issue happening in the direct requests they were making.


Or if they're not a Cloudflare customer.


Or if they have never been visited by some webproxy Cloudflare customer.


I think Google freaked out to the Nth degree because it's quite likely that cached data is probably stored in a system that doesn't have [m]any security restrictions attached to it, and... well, there are >72k people. You're going to find well-meaning "what happens if I... OOPS" types (for any definition of "OOPS"), along with (ostensibly equally well-meaning) "hey, an OAuth token that actually works! Let's see just how far we can take this..." people... and then some that aren't just interested in fun engineering challenges, if you get what I mean.

I have no idea what Google employees have access to. I've always wondered whether they can hand-code their own MapReduce syntax over Google's actual Web index (I could find SO MANY THINGS if that were possible!). I wouldn't be surprised if the cache data <-> index were accessible to everyone who's been around for >6 months, so they can tinker with it.

But I guess the only reason I'm able to type this is that I haven't signed The Large Book Of NDAs (I presume it's large).


First NDA: Don't talk about NDAs

Second NDA: If it's not published on a Google domain, you have to make it as abstract as it gets


A couple of relevant things I just thought of:

- http://www.goldsborough.me/google/internship/2016/11/18/01-5... - has some interesting tidbits; for example, I would not mind a visit to the Google Store :)

- https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea... - The last paragraph in this recent AMD AMA was a real "OH I get it now" eyeopener for me about NDAs and not announcing stuff AOT; tangentially related


> I've always wondered whether they can hand-code their own MapReduce syntax over Google's actual Web index

Indexes, and yes, in a few ways. Some even have nice frontends. You actually have access to one of those.


Thank you so much :)

I know I've wondered a lot about this subject for a few years, but I can't remember anything at this exact moment (just got home from being out). If you feel like poking me (contact info in profile) from some sort of anonymous email, that would be awesome, I could get back to you.


For a remarkably level-headed take on the fallout, I recommend listening to the latest episode of Risky Business [0]. The interview with Troy Hunt gives a calm, informed and above all well reasoned baseline for response.

0: https://risky.biz/RB445


I think part of the disconnect is that this issue is a big deal for tech professionals, but barely noticeable for everyone else.

By the nature of the bug, the likelihood of any particular individual having any meaningful exploitable information exposed to somebody in a position to exploit it is astronomically low. So most ordinary people are ignoring it, and justifiably so.

If you're responsible for security for a site that sends traffic through Cloudflare, then it's a very big deal for you. You'd better be quick on the trigger to see and react to this stuff, and you'll have to mass-reset sessions at the very least, and possibly reconsider whether you really want to be terminating SSL at Cloudflare. Exactly because, while not much has probably been exposed, you will never be able to be sure what was exposed to anyone from random hackers to the whole world, via search engine caches. So a broad reaction is justified.

And of course people who like tech but aren't actually responsible for any sites being served through Cloudflare tend to react the most. Even though it's not a big deal if you're already taking all of the standard security precautions, like different passwords everywhere and 2-factor authentication on anything important.


There's an old saying: don't believe everything you hear, and only believe half of what you read. Most news and blog outlets are horrible for information. Either they re-digest someone else's information, spin it to be more interesting, or just jump on the hype train. There are few blogs and news outlets that actually have experts worth listening to. When the news/blog outlets heard from a security expert that WhatsApp had a technical flaw, they jumped on it like a fresh piece of meat. WhatsApp didn't have any security or implementation flaws, but that didn't matter. The news/blogs didn't even understand it; they were just jacked for some ad revenue from this fresh piece of meat. It did a lot of harm to the people who use it for security reasons and to the company. But that didn't matter; it's all about getting you to download a piece of JavaScript to tell some company you may have noticed their ad that was placed on someone's website. News is dead. Bloggers only care about traffic. Experts are either paid to be used and abused or don't have a big enough audience to be heard.


I would welcome it if this incident shone a light on possible Cloudflare alternatives. For example, it should be technically doable for a DDoS protection service to only initially verify that the user is not a bot, and then merely tunnel unchanged SSL traffic directly between server and client. Does anyone do this?


Another model which might work: don't take over the customer's DNS; instead, issue ephemeral tokens (say, 30 minutes) for each IP, classifying its risk. Then the client site determines whether to drop connections; no tunnelling required.
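
A rough sketch of what such a token could be (entirely hypothetical scheme: an HMAC-signed blob carrying IP, risk class, and expiry, which the client site verifies before deciding whether to serve or drop):

    import base64, hashlib, hmac, json, time

    SECRET = b"shared-secret-with-the-protection-service"  # hypothetical

    def issue_token(ip: str, risk: str, ttl_s: int = 1800) -> str:
        payload = json.dumps({"ip": ip, "risk": risk, "exp": int(time.time()) + ttl_s})
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return base64.urlsafe_b64encode(f"{payload}|{sig}".encode()).decode()

    def verify_token(token: str, ip: str):
        payload, sig = base64.urlsafe_b64decode(token).decode().rsplit("|", 1)
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None
        claims = json.loads(payload)
        if claims["ip"] != ip or claims["exp"] < time.time():
            return None
        return claims  # e.g. {"ip": ..., "risk": "low", "exp": ...}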


You still need to accept a connection in order to drop it. You can DDoS just by making a lot of connections.


Cloudflare should count its blessings. If not for Google the fall-out would have been a lot larger.

So even if the sky didn't fall that's no reason to pretend this wasn't a big deal.


The way you know it's real: when you call up Cloudflare's top customers and ask if they would switch to the competition, the answer was a resounding yes. That's how I know it's not based on an HN bubble


I can't parse the tense here to determine if you or others did this already?


> "this is the end" and "this is so bad, we're f*cked"

End of what? It will just give rise to slightly more secure, improved services (maybe by the same providers, maybe by competitors, but definitely financed and implemented by the same people).

> And I haven't seen any huge leaks or items in the press about some terrible hack or theft that has brought someone or a corporate "down".

Look at the Sony/PSN breach; there has been zero accountability, and it has not hurt the PS4 launch at all. Consumers just don't give a shit.


I think you're misinterpreting the comments about the scale of the leak. The risk that a concrete compromise would occur as a result was always pretty small.

The bigger thing was the grandiose scale, the impact on administrators in having to rotate a significant number of credentials, and the hit to CloudFlare's reputation. A bug where you randomly dump data without regard to its sensitivity or origin (i.e., data from completely unrelated sites could've been included in the dump), and have no way to tell what actually leaked, is the worst kind of privacy bug there is, precisely because it's impossible to triage. No one can ever know everything that actually got out.

CloudFlare is now a major piece of internet infrastructure. It's impossible to know that anything sent through a CloudFlare server between Sept 2016 and Feb 2017 wasn't accidentally publicly leaked, and worse, non-trivial quantities of this data were being accidentally saved permanently in search indexes. Surely some bad actors have saved such results in their own private indexes as well.

When CloudFlare says "your site was probably unaffected", they're making a guess, because they have no way to actually tell. They're just assuming that based on the volume of requests your CloudFlare endpoint receives and the volume of requests made to endpoints that exhibited this bug, content from your site probably didn't get out. But there's no way to know.

If we take that seriously, it requires us to consider everything that went through a CloudFlare server as potentially publicized and preserved in the public record (including usually-transparent unique identifiers like session cookies/tokens). We then have to assume that an adversary obtained any and all such data, and respond as best as we can to preclude the possibility of that adversary exploiting the leaked secrets to harm our and/or our company's interests.

Of course, the flip side of the sheer scale of this, and the fact that the bug was relatively rare and that there was no way to control what content it dumped, is that it's very unlikely any of your data specifically actually got leaked.

If you and/or your company are OK with crossing your fingers and hoping this won't affect you, there is probably a 99.something-something-something% chance you'd be right. Most people have responded by resetting tokens/passwords for anything that uses CloudFlare, since that's relatively low-impact and most people were probably overdue for a credential recycle anyway, and have left it at that.

This does clearly illustrate that the internet has a few de-facto junction points, which would be very high-value for an attacker. That's worth keeping in mind.


Aside from being a black eye for Cloudflare, I don't see this issue being of much consequence. I have yet to see one real-world example of a screenshot or a link to a cache of leaked data (sensitive or not). If anyone has an example, please share. As others have mentioned, the real fear is of what could have leaked, not what did leak.


There were a couple of examples of leaked data. Session stuff, API keys, cookies, oauth tokens, and so forth.

Uber: http://securityaffairs.co/wordpress/wp-content/uploads/2017/...

Fitbit: http://cdn.iphoneincanada.ca/wp-content/uploads/2017/02/clou...

OkCupid: https://trtpost-wpengine.netdna-ssl.com/files/2017/02/cloudb...

Oauth data: https://pbs.twimg.com/media/C5ZCRtMVMAEs0ca.png

Or were you asking about some consolidated treasure trove?

The real risk, to me, is that someone noticed this before Tavis did. They could have created a site with the right parameters and then scraped it for weeks. Cloudflare only had logs for 10 days of the multi-month exposure window, so they have no idea if someone did this or not.


I deleted my subscription and account with 23andme. I have a few friends and colleagues who acted similarly with other sites.


Why delete your account and why 23andme specifically?


Why not just change your password?


My guess: For criminals, the cost of finding the needle in the haystack is just not worth it - it's easier to phish fresh credentials than to hope that you'll find some in some hard-to-crawl archived data set. So we won't see anything there.

Realistically, this will probably only be exploited by intelligence agencies who have the means of collecting all the data and motivation to do so, and maybe not even them (because they have better ways too). If they do exploit it, the nature of intelligence agencies, of course, means that you typically won't notice any direct impact.

The reason why this caused such a big panic is that while the likelihood of your password being compromised is small, it could have hit anything, and by conventional wisdom, any password/key that _may_ have been exposed, even if the likelihood is small, needs to be considered compromised. Hence, "OMG everything is compromised".

Another reason was probably that it was a really scary wake-up call demonstrating the risks of centralized services. Cloudflare is a Single Point of Failure for a lot of security, but that is easy to push aside until you see it failing.

Realistically (and I'm going to get a lot of flak for saying this) the correct way to handle it is to rotate extremely high-value credentials (think Bitcoin exchanges, administrative access to major services, ...), reset sessions if you're hosting your website on Cloudflare (since session tokens are much more likely to leak than passwords, and the cost of forcing users to re-auth is small, especially if your sessions expire regularly anyway), and then call it a day.

In particular, keep in mind that for high-value services, you're hopefully already using 2FA, so even if an attacker did get your password through this, they probably don't have your 2FA token (although Kraken, a Bitcoin exchange, pointed out to their customers that they should re-setup 2FA if originally set up during the vulnerable timeframe, since the key used to derive the 2FA could be compromised).


On the non-technical side: a LOT of time from multiple people inside our small NFP org spent inspecting logs, rolling passwords and keys, expiring sessions, and communicating with clients.


Sometimes those of us who live in the pure, mathematical world of software forget that the real world is more resilient than that.

People's passwords, identities, and bank and credit card details will have been leaked. Identity theft and other fraud will happen as a result of this. But we have systems in place for dealing with it, and ultimately life will go on. I've had fraudulent charges on my bank account; it was a serious inconvenience at the time, but it wasn't life-changingly bad.


This won't necessarily be a popular opinion, but I remember when every day there were negative blog posts and stories about Apple and the new MacBook Pros. It seemed like every developer got on their Medium and wrote a blistering post. If you just read the internet and HN, you'd think the world at Apple was crumbling down.

Yet since $AAPL released the new MacBook Pro (Oct 27th '16), their stock is up 24%, with a breakout record Q1. Let's not forget that the entire market has been in an epic bull run since Trump took office, so perhaps that is a factor.

Source ($AAPL vs Dow Jones and S&P since Oct 27th): https://www.google.com/finance?chdnp=0&chdd=0&chds=1&chdv=0&...

Don't believe what you see on HN all the time. People here are incredibly intelligent for the most part, but there is frankly a lot of disconnect from reality. In my opinion there are lots of conspiracy theorists, purists, and some social justice warriors pushing agendas.

My opinion... But I think we can bundle GitLab, CloudFlare, and Uber into categories of will be just fine.


If a movie reviews poorly but does well at the box office, were the critics/reviewers necessarily wrong? Do you think people who gave Batman v Superman a bad review thought it still wouldn't be a huge box office draw?

I doubt anyone who wrote those blog posts or comments on HN believes that this product would cause Apple to go belly up. Apple is literally one of the biggest companies in the world by any measure. Having a mediocre product is not going to tank Apple immediately. Being more and more complacent on their part will cause that.

To see Apple's future just look at what happened to Microsoft in the early 2000s. They got complacent, but they still had that sweet, sweet, Office money right?!

Not liking the latest Apple product, and then showing their stock price as proof those people are "disconnected from reality" seems a bit of an overreach...

Nothing bad will happen to Cloudflare (I'm not sure if something "bad" should even happen...?) because no one knows what a Cloudflare is, and even if people's accounts start getting hacked, how can anyone conclusively determine that it was through this Cloudflare bug?


The MacBook Pro doesn't move AAPL stock; it is a tiny portion of their sales, and the entire Mac business is around 10% of revenue.


I'm aware of that. I'm just pointing out that HN sentiment is not always reality.


I'll add a large part of the MBP discourse was people saying this was an indicator of poor performance across the board for Apple, not just in laptops.


Didn't AAPL's earnings only hit 'record' levels because they added a week to the quarter?

http://www.theverge.com/2017/2/1/14468090/apple-q4-2016-earn...

Otherwise it would have been a down quarter, no?


The Verge? I'd say you probably want to get your financial news from the Wall Street Journal, CNBC, or other established, trusted sources.



Two words: Carl Icahn.


Namecheap uses Cloudflare, and didn't email their customers (including me) telling them that they may want to change their password...

I have now transferred every single one of my domains away from namecheap

I also installed the following extension, and now watch what I put into cloudflare pages: https://chrome.google.com/webstore/detail/claire/fgbpcgddpmj...


What do you mean by "Namecheap uses Cloudflare"? The authoritative DNS for namecheap.com is Verisign.


What kind of car do you drive?


Netki sent me an email saying they think they might be affected, so they strongly recommend changing your password.


Well for me, the fallout has been the pain in the ass task of changing all my passwords.


Does anyone here (startups) use DDoS protection?


I run a few medium sized side projects (x-xxx million pv/month). I can never use a per-GB-cost CDN solution, and all of my sites require protection up to L7.


Your response doesn't make much sense to me, sorry. Why can you never use a per-GB-cost CDN solution? Is it because you deliver your content via SSL and don't want to share your SSL keys? Is it because you can't afford the potential CDN delivery costs? Is it because your projects don't benefit from CDN caching? Also, I am not sure what point you are trying to make about layer 7 protection. If you need layer 7 protection, get a WAF.


I need DDoS protection because they get hit nearly daily (esports/gaming related; this is par for the course in this vertical).

I can't pay per GB because of the DDoS protection portion - the last time I used AWS/Cloudfront the skids just wget looped on a hundred thousand threads in the most expensive per-GB region. Cloudflare is basically the only "CDN" that I can feasibly use for even just images. I'm happy to pay, but unmetered only.

I deliver via SSL but there is close to no PII, I don't mind that much about the MITM factor.

Lower layer protection against volumetric floods is needed (standard attacks, all UDP needs to be dropped as high up as possible), but also L7 protection is needed - not vulns/XSS/SQLi/etc, but we're talking bursts of 10m-20m+ req/s to whatever the most expensive endpoint is (usually search), registration attempts, if any third party APIs are used, the intent is to exhaust as many calls as possible or deny service in the end.

I have a stupid amount of nginx and custom lua rules + redis trying to clean up whatever gets passed through, things like "if this IP has shown over 40 different user agents in the last 2 minutes, drop it as high up as possible, ideally before it enters my network" and "if this user agent contains Chrome but the request headers don't accept sdch then this is a flood".
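
The first of those rules, sketched in Python against Redis purely to illustrate the logic (the real thing is nginx/lua; the 40-UA/2-minute threshold is from the comment above, everything else is made up):

    import time
    import redis

    r = redis.Redis()

    def too_many_user_agents(ip: str, ua: str, window_s: int = 120, limit: int = 40) -> bool:
        # True if `ip` has presented more than `limit` distinct user agents
        # in the last `window_s` seconds -- a candidate to drop upstream.
        key = f"ua:{ip}"
        now = time.time()
        r.zadd(key, {ua: now})                       # sorted set scored by last-seen time
        r.zremrangebyscore(key, 0, now - window_s)   # expire old entries
        r.expire(key, window_s)                      # let the whole key age out
        return r.zcard(key) > limit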

The commercial/for-profit larger sites in this group are behind Akamai and Distil. Both of these cost $comedy.

This level of bullshit is fairly normal for video games.


Hopefully the fallout is that Cloudflare gets its act together.

Even if your friends know 100% that they can't possibly have been negatively affected by tons of private information being dumped all over the internet, I'm not sure how such anecdotal evidence is any more instructive than a HN "bubble".

Even if nobody at all ended up negatively affected in any serious way, I don't see why people shouldn't remark on the potential effects of such a fiasco when it happens. Was anyone really predicting "the end"?


Weird how this post seems to have annoyed some people.


> Should we always take news like this with a grain of salt?

Yes.

Except this fear is part of our income source, like the TSA's; except for them it's more like 100%, while for IT it's a bit less.



