Cloudflare's new DNS attracting 'gigabits per second' of rubbish (zdnet.com)
357 points by sohkamyung 10 months ago | 195 comments

I've seen some of the papers where people look at big chunks of unused address space and watch the probes, etc. It is really quite amazing. Once I screwed myself royally by accidentally turning RIP on for the upstream side of my router (connected to the cable modem): it advertised 192.168/16, which Comcast accepted, and they started routing random stuff from the local exchange to my router. It was pretty funny talking to their NOC staff, who were mad at me for advertising the route, but I pointed out it was pretty stupid of them to accept routes like that from their edge nodes. What if I had sent them a BGP route for China, then what would they have done? (Don't try that at home, kids, unless you don't want to use your network for a while.)

As an ex-Comcast employee, seeing stuff like that happen really doesn't surprise me.

When Comcast first rolled out that data cap nationwide, I started prodding at it one night out of morbid curiosity.

Turned out that it would silently slurp all HTTP traffic! Once you hit some arbitrary measurement (e.g. 50%), it would immediately start hijacking all HTTP websites you visit and inject a ton of JavaScript to put a message over the web page forcing you to acknowledge your cap.

Nmapping the server they used caused the messages to immediately disappear, and the server itself seemed to vanish. Turned out the firewall was just blanket-banning the entire IP range when it saw a port scan!

The upside being that Comcast would stop MITM'ing HTTP traffic for about 72 hours.

And people wonder why HTTPS everywhere is such a necessity now. It should not be necessary to treat your last-mile ISP as a hostile entity, but sadly, it often is.

I canceled my decade-old Cox account the last time that happened. I even asked nicely for them not to "help" by editing traffic; the runaround was fun. At the end they offered to take my ~$90/mo down to ~$70, re-confirming they had no idea what I was unhappy about.

A $12/hour call center customer retention worker in rural TN has no idea what you're complaining about, their job is simply to meet some retention metrics on a weekly basis.

Even if you can actually reach the people who run the ASN of your ISP, if it's something big like Cox, Charter, Shaw, etc., they'll be politically unable to confirm or deny anything, and won't want to talk to you. You might get a straight answer if you are in a similarly senior position at an equivalent-sized ISP that has mutual settlement-free peering, such as between RCN and Charter.

Such are the joys of modern "customer support" -- human beings don't scale, because they need to sleep and can only talk to one other human at a time. So you hire the cheapest ones you can find, and instruct them to be minimally helpful. Even better, make them all "managers," so "can I please speak to your manager" will just take you to another minimum-wage employee.

If you want actual customer support these days, your best bet is to create a PR problem for the company via social media, because PR flacks are paid enough to matter.

A smarter, and perhaps even more profitable way to approach this problem is "OK, we want almost everything that happens to customers to be a self-serve situation" and then make your (fewer) call centre employees highly trained troubleshooters who can figure out why _this_ customer wasn't able to self-serve and get that sorted for them. The result is a better customer experience (usually everything just works, when you need to talk to a human the human is an _expert_ who helps fix your problem, not a drone working from a fixed script) and it can be much cheaper if done right.

Definitely if you think of your product/ service as "premium" this is the correct model to have.

Lots of people simply do not like self serve. I get that companies want to cut costs, which is why we got endless telephone menu trees, then support web sites with crappy search, then automated telephone agents, then chat bots, and on and on. And the public has devised all these various tactics to skip that crap. I just want to talk to a damned human being rather than spend 10x the time navigating your poorly thought out Customer Avoidance Systems! Sometimes I need a drone and sometimes I need an expert. I have yet to see an automated system that could successfully determine which tier support I needed.

Often I prefer "self-serve," but sometimes I like to talk to a human who knows about the system that has created my problem, and can quickly fix it. That's why there are websites dedicated to showing people how to skip through phone menus and reach actual humans, even if those humans are soundboard operators paid almost nothing to make you go away.

link? I refuse to talk to computers.

The fact that these pages exist tells you all you need to know about the fundamental user-hostility of modern tech companies:



In cases without competition. I live in an area with three gigabit-capable ISPs and the difference in support quality is unbelievable, even for the Comcast and Verizon customers – just calling from a competitive neighborhood gets your call processed differently.

Which city?

Washington DC - my neighborhood has Comcast, RCN, and Verizon. Our neighbors report better service on all three — and when you go a couple blocks south where the FIOS rollout stopped, regression to normal sets in for Comcast.

We also have municipal fiber but they’ve chosen not to make that available for residential service which is really disappointing but … politics.

So, channel-bonded DOCSIS 3.0 and 3.1 on Comcast and RCN, and GPON fiber from Verizon? Or are the cable operators also doing single-mode to the house now?

Definitely for RCN. Comcast ads have claimed fiber but I don’t know whether that’s available or just planned since I won’t do business with Comcast.

Sounds like Seattle

Impressively (given the rest of the company), Comcast DNS is on twitter [1] and is fairly responsive/tech savvy and unrestrained. It feels like someone in the department got one over on management - "look, if you get in our way things break real fast. Trust we know what we're doing and you won't get calls about state.gov being blackholed".

1. https://twitter.com/ComcastDNS

At my work, we treat our own internal network as a hostile entity. Defense in depth.

Same here. Devs can't even mount thumbdrives. Which is fine for us, we don't need them, but it prevents some funny business should one of our laptops get stolen.

Ditto. No flash drives except for special ones distributed by the IT department, and even then it takes a few months of reviews to get one.

But we go one further — no development on the LAN. Develop on another server at a different hosting company, and only deploy on the production host when thoroughly vetted.

For some reason, I get this mental image of everybody in a city street dressed in hazmat gear.

> no development on the LAN

Does this just mean staging/shared servers are not deployed on machines inside the corporate network, or that developers can't have their dev environment locally and instead remote in to some other machine to do their work?

I've heard of the latter in a few companies and it's always the kind of thing that makes me nope out of ever applying to them.

Devs do most of their work on localhost. Changes are pushed to an independent server on an independent network in another city. Once those changes are vetted, then the work is mirrored to production.

Each dev has his own cellular connection for internet on his development box. Again, no LAN access. Corporate e-mail and cross-department file servers are on another box on each dev's desk.

The downside is massive over-usage charges for each dev's cell data (40-50 GB/month/dev over). The upside is that devs are effectively airgapped from the company, which I assume is the primary goal of all this.

Also, chat is banned. There have been efforts in the past to bring in tools like Slack, but the honchos believe that it makes the company better as a whole if we speak to each other like human beings, especially cross-department. Even if that means slowing down a project. And even if that means having to walk across campus, or occasionally driving to another part of town to another building. Phones are largely only used when off-site, or we need to talk to someone in another city.

It all sounds tremendously inefficient, but I try to think of it as being like working at IBM or Sperry in the 60's. It seems to work. The company is profitable and expanding, and has been for 40+ years.

Oh wow - I would not want to work there ever. I understand the need for some security but this seems just over the top.

Your last mile ISP is almost certainly a hostile entity, but HTTPS alone isn't going to save you [0].

[0]: www.cs.umd.edu/class/fall2017/cmsc818O/papers/tangled-mass.pdf

That paper is from 2014 and some of the circumstances described have since changed.

They've also done some things that I assume fell out of operating ICSI's Notary but don't make any real sense for this paper.

For example: for a real user, what we care about is the cert the end user was presented for a site: would it be trusted in (Internet Explorer on XP, Safari on iOS, a Python script on a Debian machine, etcetera), and would it be trusted on this smartphone?

And what they've looked at is, were the same Trust Stores baked into an Android phone as the above systems? But that's subtly different in a way that fogs the issue here. Example:

Suppose phone X trusts ISRG Root X1, XP trusts DST Root CA X3, and a Debian system trusts Let's Encrypt Authority X3. Those are, to the naked eye, and to this study, three completely different things. But in _practice_ for an end user it'd turn out any of the three work for trusting a vast number of certificates used on the web. Trusting one or another _does_ matter, but this paper isn't about why that is, and doesn't really explain what's going on here; it treats that sort of scenario as anomalous and potentially alarming without explaining.

The paper did remind me that ICSI's Notary won't work with TLS 1.3, which I have sort of known but never mentally addressed. The ICSI Notary works by peeking inside TLS sessions. In versions up to TLS 1.2, the server's Certificate is delivered unencrypted, just before both peers' encryption switches on and their communications become unintelligible. This is used by the Notary and by lots of crappy middleboxes, but in TLS 1.3 the encryption switches on earlier, before the certificate is sent, so the Notary can't see certificates any more.

I suppose the only alternatives are to use a VPN (and hope your VPN provider isn't a hostile entity), or to use Tor (not appropriate for all kinds of traffic). This is the world we've made for ourselves...

They still inject that message now. I hit 90% almost monthly.

Wow, I’m surprised. That is such a low barrier to doing your own BGP hijacking.

Well, not really. It doesn't mean that Comcast was re-advertising it into BGP and also advertising it to their peers. Usually you use a combination of prefix-list filtering and route tagging so you don't advertise garbage to your peers. RIP and other IGP protocols do not need to have their neighbors explicitly configured, they will find neighbors automatically on enabled interfaces. On some platforms they are enabled on all interfaces by default so it will form neighbor relationships with anything connected. A sane network design would be to only enable it on interfaces connected to your own routers (or customer interface, but on a different process ID) and also use a password.
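The filtering the parent describes can be sketched with Python's stdlib `ipaddress` module. This is illustrative only; the bogon list is abbreviated and `accept_route` is a hypothetical helper, not a real router feature:

```python
import ipaddress

# RFC 1918 and a few other ranges an ISP edge router should never
# accept from a customer-facing interface (illustrative, not exhaustive).
BOGONS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("127.0.0.0/8"),
    ipaddress.ip_network("169.254.0.0/16"),
]

def accept_route(prefix: str) -> bool:
    """Return False if the advertised prefix overlaps any bogon range."""
    net = ipaddress.ip_network(prefix)
    return not any(net.overlaps(bogon) for bogon in BOGONS)

# The accidental RIP advertisement from the story above would be rejected:
print(accept_route("192.168.0.0/16"))  # False
print(accept_route("8.8.8.0/24"))      # True
```

A real deployment would express this as prefix-lists on the router itself, but the logic is the same: drop anything a leaf network has no business announcing.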

Any sane bgp neighbor has prefix limit ACLs and prefix filters in place on their edge connections to another peer. As an ISP that announces a lot of space and customer space to 2 upstream transits, every time we take on a new downstream customer that brings their own /24 or bigger, we need to have our upstream transit providers update their prefix-list filters.

If our upstreams were clueless or negligent, it would be possible to get into a situation such as when a Pakistani telecom announced a huge chunk of v4 space belonging to YouTube, effectively DDoSing their own international submarine links and also taking down YouTube for some users worldwide.
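The mechanism behind that hijack is longest-prefix matching: the most specific route wins. A toy Python illustration (the prefixes are the ones commonly reported for the 2008 incident, and `best_route` is a hypothetical helper, not real BGP):

```python
import ipaddress

def best_route(dest: str, table: dict) -> str:
    """Longest-prefix match: the most specific covering route wins, which
    is why a hijacked /24 overrides a legitimate /22 for the same space."""
    addr = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(p), nh) for p, nh in table.items()
               if addr in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

routing_table = {
    "208.65.152.0/22": "youtube",           # legitimate announcement
    "208.65.153.0/24": "pakistan-telecom",  # hijacked more-specific
}
print(best_route("208.65.153.1", routing_table))  # pakistan-telecom
print(best_route("208.65.152.1", routing_table))  # youtube
```

Prefix filters at the upstream are what prevent the bogus more-specific from ever entering the table in the first place.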

Except many of the big upstream providers don't actually filter anything. HE.net, for example, doesn't.

We forgot to create some IRR entries and GTT just accepted our prefixes.

There is essentially no security, it's fairly trivial to hijack whatever space you want. (Doing it undetected is more difficult though!)

It's even easier to steal a phone number.

Lots of phone companies still just approve a port if you send them the required paperwork to initiate a port. That means with zero verification from the account holder a number can vanish from your account.

Worse. Some very large carriers don't even look at the supporting documentation (bill, LOA) submitted with port orders unless there's a rejection from the losing carrier and they want to double check the address entered or something. Hijacking numbers is crazy simple. Same for hijacking the SMS functionality of any number in the US (voice traffic remains untouched). In about 10 minutes you can start receiving SMS directed towards any number you want, and also be able to send out texts originating from that number. Anyone who relies on SMS for any type of authentication should stop.

SS7 was designed in an era of huge monopoly telecoms that all trusted each other. It worked fine. It needs to be burnt to the ground, the ashes stomped on a bit, and rebuilt with the same level of thought that has gone into the development of TLS 1.3 for modern use. It won't happen, though, due to the sheer mass of installed telecom gear worldwide.

Yes, sunk costs mean that while those costs are amortized these technologies will remain in place. Upside is there's plenty of business in the area of plugging the holes in the meantime!

> Anyone who relies on SMS for any type of authentication should stop.

Since the problem is that hijacking numbers is easy, shouldn't that apply to “anyone who relies on telephone numbers”, not just “anyone who relies on SMS”?

SMS isn't the only telephone-number-based second factor.

> Anyone who relies on SMS for any type of authentication should stop

Err. That's pretty much every implementation of 2FA around the world.

Why isn't this more well known?

The beauty of it is cases like Google's. They have this bizarre 2FA security-theater Google Authenticator thing, but then nearly force everyone to have their phone number as a "backup device".

Guess what they send you when you forget your 2FA or password? Yep, an SMS. So out the door goes the whole point of 2FA. Your three factors (account name / email address + password + Google Authenticator) have now been reduced to one: control of your phone number.

I can rent a mobile tower in Malaysia or some other Asian country, advertise your phone number as roaming there for about €10/h, and start intercepting all your shit. Or just get your telco's inept service dept to forward your number somewhere else.

Lessons here:

1. Even the giants get it wrong.

2. There is no security anywhere in the tech world. Literally everything is broken. Your electronic car locks / starter system, your phone, your internet: everything is horribly horribly horribly broken beyond any imagining, even for hyper-tech-savvy people.

3. Remove your phone number as a backup device from your Google account and never use it as a backup device ever again.

Once you add another factor you can remove SMS from your Google account. I’ve done it with all of mine.

Edit: Oh, you said that.

I just removed my SMS from Google auth, thanks! And set up an Authenticator (Azure). I would like to see a world where we start removing SMS (and old passwords) from existing accounts.

This is why the recent NIST guidelines on 2FA explicitly discourage using SMS. (Search for ‘SMS’ in the document: https://pages.nist.gov/800-63-3/sp800-63b.html)

Not nearly as many people know how terrible SS7 is, or about the lack of security/PKI/crypto in old-school traditional telecom. It is also a lot more opaque to learn, has higher barriers to entry, and is a very clique-like club of "telco" people.

PacketCable (VoIP over cable internet) is even worse.

When it came out, if you wanted to "borrow" someone's phone number, all you had to do was clone the MAC address of the VoIP (EMTA) port.

If someone called the number, both your phone and the victim's would ring.

Same with CDMA. You just had to copy the phone number and ESN. Boom: whichever phone was closest to the tower would ring. If both phones were pinging with near frequency, then both could receive the same SMS and calls, but only one could be on a call at a time. No actual interception.

Things got a bit different when the MDN and MIN were different from the ESN pair. Calls still came in, but you couldn't auth or call out for data services.

It's all a bit old now, but look up QPST and QXDM for the past decade, and for 20 years ago look up the Oki 900.

For some reason I think CDMA could suffer from crosstalk in certain circumstances, but it was something relatively obscure, like one device had to be above another (like an apartment building), but otherwise in the same/similar coordinates, and then certain code assignments would allow crosstalk to happen. It was weird, because it could be one direction only.

Unfortunately I don't really remember the details, since I worked on the core data network at the time.

As someone who tried to port about 1,100 phone numbers last year, this couldn't be further from the truth. Getting numbers ported is a GIGANTIC pain in the ass, where ANY small detail being wrong will reject the entire batch. Since the slamming problem in the 90s it's been increasingly hard to port phone numbers without every T crossed and every I dotted exactly right.

You have no idea

Even for regular IPv4 addresses you often get upwards of 20-40k SSH probes per day trying common passwords against root. IPv6 largely makes this go away, since the address space is too big to brute-force scan.
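A rough sketch of how you might tally probes like that from an OpenSSH auth log; the log excerpt, hostnames, and addresses below are made up for illustration (documentation IP ranges), but the line format is the standard sshd syslog shape:

```python
import re
from collections import Counter

# Hypothetical auth.log excerpt in the usual sshd syslog format.
LOG = """\
Apr  3 02:11:07 host sshd[991]: Failed password for root from 203.0.113.9 port 41122 ssh2
Apr  3 02:11:09 host sshd[991]: Failed password for root from 203.0.113.9 port 41130 ssh2
Apr  3 02:12:44 host sshd[993]: Failed password for invalid user admin from 198.51.100.7 port 55012 ssh2
Apr  3 02:13:01 host sshd[995]: Accepted publickey for deploy from 192.0.2.10 port 50122 ssh2
"""

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

attempts = Counter()
for line in LOG.splitlines():
    m = FAILED.search(line)
    if m:
        user, src = m.groups()
        attempts[(user, src)] += 1

print(attempts.most_common())
# root from 203.0.113.9 tried twice, "admin" from 198.51.100.7 once
```

Run against a real server's log, counts per (user, source) pair in the tens of thousands per day are exactly the pattern the parent describes.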

> Even for regular IPv4s you often get upwards of 20-40k SSH probes per day trying common passwords against root

So...anyone here ever set up a throwaway machine with root ssh enabled with one of those common passwords, so that some of those could get in, so you could see what they actually try to do once they are in?

If so, what did you see?

Yes. And a lot of script kiddies (people who were just using easily acquired scripts to attempt to break into hosts). Looking at the sequence of attempted passwords, you could tell some of these scripts were just fed the "most popular passwords" list and were working their way down it.

Yes, this is a pretty common practice. Take a look at the Honeypot wikipedia page and enjoy the rabbit hole. :)

Yes, but the volume in kbps is tiny.

A German podcaster who has been working on networks for decades once said that he owns a large chunk of public IP addresses in the 192.68.0.0/16 subnet, and it's impossible for him to use it, because once he activates it he basically gets a DDoS of misdirected traffic. So many misconfigured networks out there...

Did he say what the volume was in Gbps? Considering the market value of a /16 now for residential-use DHCP pools, it could easily be leased via LOA to a huge ISP that would get most of the shit traffic via settlement-free peering on N x 10/100 Gbps ports, distributed between many cities. If it's something like 15 Gbps of constant junk, I could still find a way to make it useful. You'd probably need to nullroute the most common /24s, so you'd lose the equivalent of a /22 to that.

I solidly feel it's a cop-out for an ISP not to filter their traffic to block spoofed IPs. In my eyes, there's zero legitimate reason that this guy should get flooded, but alas, our industry gets lazier and more careless each year.

There is no spoofing involved there. We're talking about typos of 192.168 (private space) typed as 192.68, which is "real" space. It is not like ISPs are leaking RFC 1918 IP space.
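Python's stdlib `ipaddress` module makes the one-dropped-digit difference easy to see:

```python
import ipaddress

# One missing digit turns RFC 1918 private space into someone's real,
# globally routable network:
print(ipaddress.ip_address("192.168.1.1").is_private)  # True  (RFC 1918)
print(ipaddress.ip_address("192.68.1.1").is_private)   # False (routable)
```

So any host where an admin fat-fingered 192.168 as 192.68 sends its "local" traffic out to the real owner of that space.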

Very true that it is a typo, which I had noticed, and thanks for calling out. Still doesn’t invalidate my point though. My theory is that a lot of that traffic is likely coming from spoofed IPs as I doubt there would be substantial sustained legitimate, but improperly directed, traffic. My guess is a lot of it is shoddily written malicious traffic.

Google for “BCP 38”.

This pain has been known for many, many years.

I don’t need to Google it, I’m well aware. Part of why I said what I did.

Also, outbound traffic needs to be expensive, not free like it is now.

There simply shouldn't exist a scenario where a household is unaware that their hijacked toasters have been saturating their upload for months.

It seems unfair, but these are solutions that actually work. Waiting for things to fix themselves clearly isn't. We're quickly entering a reality where everyone will be using Cloudflare instead of just most of us.

It's pretty incredible how far naive decentralization got us, though. Soon we're going to be looking back in awe.

Freakshow \o/

I can understand why this comment was downvoted without any further context, but yes, the podcaster's name is Clemens Schrimpe and he probably mentioned that fact in the German podcast "Freak Show": https://freakshow.fm/

According to the show notes at least in FS117: https://freakshow.fm/fs117-yksi-kaksi

I worked with my ISP, htcinc.net, to fix routing to 1.1.1.1 this week. Not sure how they had their core router misconfigured, but it was dropping the traffic.

Most likely they had an old copy-pasted bogon filter in place for a huge chunk of previously unannounced APNIC IP space.

Thank you!

> AT&T Gigapower using 1.1.1.1 on an internal interface on at least one model of router-gateway, the Pace 5268AC

Yup. I can't use 1.1.1.1 because my AT&T router is responding to it.

"Whatever, just use 1.1.1.1! Nobody will ever use that address!"

Because 1.1.1.1 is so hard to type or remember, and is totally not a private range, which is perfect for the purpose.

My router goes to

Spinal tap?

This is networking, so it's "Spinal Vampire Tap".

Backbone Tap. The cover band by NSA employees. They play in venues, inside other venues, that don't officially exist.

Ohh, I see! I tested ping to 1.1.1.1 and the latency was less than 1 ms. I was surprised; now I see why. Funny :)

FTA: 1.0.0.1 in addition to 1.1.1.1

Try 1.0.0.1 as an alternative.

Can't change its address?


Geoff Huston has run a study on the traffic being directed towards 1/8 before.


Very interesting how the volume of shit traffic to /24s which were not in the "typical" example/documentation ranges was much lower. When they announced the whole /8 and plotted the traffic, only a few /24s received huge volumes of shit, while others received (relatively) little, like 8 Mbps.

I used to play a game where each kingdom had an address (kingdom:island). If you were in kingdom one on island one (1:1), you would get attacked all the time no matter how much defense you had. If you landed on 1:1 you were basically doomed.

Utopia, right?

Oh man, I remember this too. A friend of mine ended up leader of a huuuuuge alliance for a number of years back in 2004 or so and a few years ago I had a few sit down meetings with the current owners of the game when I was doing an analytics startup. (AFAIK the game is still running)

I noticed an issue with several public WiFi hotspots after setting 1.1.1.1 as my primary DNS:

The login/"landing" page when connecting to these hotspots would not load. Changing back to my previous DNS fixed the problem.

That means they're intercepting requests to 1.1.1.1 (even if only before login), probably because of its popularity. It's a shame we still have to use these hacks to log in; there's a solution for that in RFC 7710 (which sends the captive portal information in DHCP), but who knows if and when it'll be adopted by most hotspots.


> That means they're intercepting requests to 1.1.1.1

No, it means their hotspot uses 1.1.1.1 as an internal IP. I've seen this in a bunch of places.

Cisco gear is probably the biggest culprit in my experience.

I thought the way these Wifi hotspots worked was that they intercepted all DNS traffic? How else would they work with legacy systems?

Modern OSes detect these login pages by making a DNS lookup of a known domain, e.g. macOS/iOS look up "captive.apple.com", and if the answer is not in the expected subnet they know someone is intercepting DNS and show the WiFi login window.

A lot of captive portals use HTTP interception/redirection, returning a 30X status instead of the expected 204 status.
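That probe logic boils down to classifying the response status of a connectivity-check request. A minimal sketch; `behind_captive_portal` and its exact policy are assumptions for illustration, not any OS's actual implementation:

```python
def behind_captive_portal(status: int) -> bool:
    """Classify the HTTP status returned by a connectivity-check probe,
    i.e. a request to a known URL that normally returns 204 No Content."""
    if status == 204:
        return False  # expected empty success: open internet
    if 300 <= status < 400:
        return True   # redirected, almost certainly to a login page
    return True       # any other answer means something is interfering

print(behind_captive_portal(204))  # False
print(behind_captive_portal(302))  # True
```

Real implementations also compare the response body against a known token, so a portal that returns 200 with its own HTML is caught too.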

Previously (February 2010):

"As part of an effort to de-bogonise this newly allocated address space, RIPE, in cooperation with APNIC, made some test advertisements to the global BGP table for several prefixes within 1.0.0.0/8. Specifically, these networks included 1.1.1.0/24 and 1.2.3.0/24. Why these networks? Because they contain the novel (and illegal) IPv4 addresses 1.1.1.1 and 1.2.3.4, of course.

"Shortly after announcing the routes to the world, RIPE's RIS was flooded with over 50 Mbps of traffic destined for what is still an unallocated network; it should not appear on the global Internet."

* http://packetlife.net/blog/2010/feb/5/ripe-plays-with-1-0-0-...

So, the traffic is being sniffed and analyzed, and that service was advertised as privacy-oriented?

"The service" is the DNS service. If you send random garbage to random IP addresses I think you waive the right to privacy.

>APNIC gets to see the noise as well as the DNS traffic

>Huston emphasised that APNIC intends to protect users' privacy. "DNS is remarkably informative about what users do, if you inspect it closely, and none of us are interested in doing that," he said.

Maybe it is reasonable to take them at their word as they seem trustworthy, but we should at least consider the fact that at least some of this DNS traffic is indeed being analyzed.

No, wait.

Users of the DNS service get the privacy guarantee.

Non-users do not. If you floodping you are not a user of the DNS service and the privacy terms don't apply to you. Rather you're a member of the Misconfiguration Club, and the site you're pinging has the usual right to analyse your pings.

What if somebody has a bad DNS resolver, and what it qualifies as a valid DNS request, the researchers do not?

I get the general idea, but having "user-privacy oriented" and "we collect everything and make it available to many researchers" services under the same IP may lead to some issues.

Even a bad DNS resolver will still send to port 53. The privacy policy probably applies to anything on ports 53, 80, and 443.

and the DNS over TLS port, 853

Oh, in that case you can apply those issues to all of Cloudflare. They serve many thousands of websites from each node. God only knows how many different privacy policies may apply depending on which bytes you send to TCP port 80.

I'm pretty sure that all the traffic is being analyzed. The only thing they publicly committed to is not saving your IP address.

> "Under the terms of a cooperative agreement, APNIC will have limited access to query the transaction data for the purpose of conducting research related to the operation of the DNS system."


DNS traffic, no. Random garbage traffic misdirected to 1.1.1.1, APNIC is studying.

Does all this garbage traffic affect the performance of Cloudflare's servers? There must be some cost (performance and $$$) to filter this traffic. Was that a consideration when deciding whether to use 1.1.1.1 instead of some other IP address? :)

Vendors like Cloudflare usually have really easy-to-use and cheap measures to drop traffic; otherwise they would not be able to provide DDoS protection. Usually they have BGP or other means to propagate blacklisted IP ranges to peers, meaning that not even their peers will route the garbage towards them. This is how you can survive a DDoS bigger than your pipe while serving legit traffic (coming from different ranges than the ones you blacklisted).

No. We have a lot of capacity. A lot.

For ordinary single-homed users who don't get the "a lot": as an example, Cloudflare has 40 Gbps of capacity to the SIX in Seattle. I would guess that they also have direct PNI peering sessions, at minimum 10 Gbps each, with other huge ISPs in the Pacific Northwest which never see the SIX fabric. So probably add another 20 individual 10 GbE circuits, at bare minimum, to that 40 figure. All of which helps spread the traffic load out rather than shoving it all down a few pipes.


that's just the public stuff!

For huge entities like this and the other top-5 CDNs, it makes me wonder how many full-time staff positions are dedicated to buying rack space, power, and cross-connects in major colo facilities worldwide. How many contract-law experts, telecom real-estate analysts, etc., before you even get into things like experts in Japanese contract law. Just for rack, power, and facilities at layer one of the OSI model, before any networking happens.

I don’t know the exact number, but you’re off by roughly an order of magnitude. Cloudflare peering is in the terabits/sec range globally.

At many non-profit IXes the interface size to the fabric is public data, published by both the IX and on PeeringDB, such as at the SIX. They have not yet upgraded to 1x100GbE.

Another regional example: they have 20 Gbps to the VANIX.

What is opaque is the size and scale of their PNI peering, which parties generally don't share. For example, in a mid-sized city where Comcast is the cable monopoly, they almost certainly use a 100 GbE interface direct to Comcast for just that ISP.

Yes the scale is terabits globally. But it is highly decentralized.

> But it is highly decentralized

Isn’t that the entire point of a CDN, to have decentralized POPs scattered globally?

Yes, they may max out at 100 Gb per public IX in most cases, but they still have lots of 100 Gb peers all over the globe.

Yes, it's exactly the point. What I was saying is I am not off by an order of magnitude, I know exactly how big they are. Two ASes I do engineering work for peer directly with cloudflare.

> I am not off by an order of magnitude, I know exactly how big they are.

Your post implied a total of 200-400 Gb; the real number is 10x that (otherwise known as an “order of magnitude”). I’m not disputing your knowledge or experience, but the post as written has issues.

No, my post was quite specifically about a single IX point in one geographical location, and was accurate. You confused my talking about interface sizes and counts at one IX point with Cloudflare as a whole.

What difference does it make how much they have as peering in one specific geo? Are you going to posit then that that is their capacity in that geo? I hope not, as Cloudflare has their own backbone and could easily and efficiently exchange traffic outside the geo depending on peering agreements.

Cloudflare is anycast announcing the space from something like 30 to 50 unique POPs worldwide, so the volume of shit traffic is significantly decentralized. It's not like 40 Gbps is hitting one location.

They scaled up recently and now have ~150 POPs. They specialized in DDoS defense from the start, so traffic volume isn't an issue for them. Everything below 100 Gbit/s globally will probably be well within normal fluctuations. But I guess they talk to ASNs that send a lot of garbage and ask them to investigate on their side.

Sure, but they could also study DNS traffic if they wanted to. Or at least, with Cloudflare's cooperation.

If you want privacy, you never do DNS queries from an ISP-assigned IP address. Tor exits do DNS queries on behalf of clients. Decent VPN services also handle DNS queries for clients.

They don’t get raw DNS traffic. Ever.

But that's just a Cloudflare policy, isn't it?

Or are you arguing that even Cloudflare couldn't get raw DNS traffic?

That’s our policy and we’ve hired outside auditors to ensure we’re honoring it. If you have suggestions of what else we can do to prove we’re a company of our word, LMK.

Oh, I didn't realize that you're a CloudFlare cofounder.

I don't mean to question CloudFlare's integrity.

It's just that, for claims about privacy, I'd rather depend on more than trusting any one party.

Great. Use 1.1.1.1 and you don’t depend on one party now?

Yes, thanks.

Even if you just use 1.1.1.1, you know you have 2 parties: Cloudflare and their auditor.

There is that argument. But both are likely vulnerable to coercion from some adversaries. And of course, that's always a risk.

Will these auditors guarantee that you won't wake up one morning after a troublesome sleep and start monitoring this to protect us against Nazis? Because that might be good to ensure we believe your word.

I have a problem with these comments because they're basically a false dichotomy. No, CloudFlare can't ensure they play fair. Can your ISP? Who can?

Because these sorts of comments read to me as "yeah, you're the best right now, but are you perfect? No.". Nobody claimed perfection, and there's value in being the best.

Precisely. There's no perfect. OK, so I use IVPN. I've known a principal for many years. I write stuff for them. And I trust them to protect my privacy. More than any other VPN service. I also trust a few others, almost as much. And way more than I trust my ISP.

But I never depend on any one of them. I use nested chains. That's by no means perfect, because routes are relatively static, unlike Tor with its frequently changing circuits. But the basic idea is the same. Compromise would depend on collusion, perhaps forced, of multiple parties. Or some serious traffic analysis.

Here, there's CloudFlare, its auditors, and perhaps Google. So maybe there is distributed trust in that. But still, I'm happier to put more independent parties between me and them. Tor, or at least a nested VPN chain.

Cloudflare's CEO was guaranteeing that they have external auditors to ensure the company keeps its word. I'm asking whether the same auditors can ensure that the CEO himself won't decide one morning to go against this word wrt this topic. In the same way that a ToS might state "we will never sell your data to third parties" and a CEO can't break that promise, what guarantee beyond "some external people are looking at what we do" and "oh, just trust me" do we actually have? I'm asking if the auditors will also guarantee the CEO's word.

This is relevant as the CEO has previously woken up one morning after a troublesome sleep and, it has been argued, gone against his word (for good, anti-Nazi reasons). Many argue that he was entirely within his rights to do so, and he was! So in this case, would he be entirely within his rights to start monitoring all that data? As he asks, paraphrasing: "what more can I do to ensure my word is good?"

I'd bet that if they one day drop that promise from their page they'll quickly hit the HN front page. If you don't trust, just monitor their policy page for diffs.

Will they at least get processed information about the DNS traffic (like how much of it is normal recursive queries versus broken garbage which happened to go to port 53)?

It was reworded enough times to make their promise vague and not well defined.

I’m Cloudflare’s CEO. What questions do you have?

I’ll start: do we ever store users’ IPs? No. They’re never written to disk. And APNIC never has access to them.

What data do you provide to APNIC? We give APNIC reports on non-DNS data that’s hitting 1.1.1.1. It includes information like: what protocols are sending data to the IP, what’s the volume, where is it coming from?

For DNS users of 1.1.1.1, we never provide APNIC any identifying information. We don’t upload any data to them. While they can query for questions like: “How many queries came from India in the last 24 hours?” they can’t query anything on a specific user.

If you have concerns, ask them here. I’ll answer.

Are you aware that your public resolvers are actively breaking DNS-based GeoIP (stripping EDNS0 Client Subnet and not using source IPs geo-localized where the requester would be)? And if so, what is the rationale for it?

Yeah I tested it out and switched back, it made performance to Twitch in particular quite bad for me. I don't get that issue with Google DNS though.

Google takes good care of it... and they explain precisely what they do on the topic:


https://developers.google.com/speed/public-dns/faq#locations (when EDNS0/ECS isn't supported)

The tin foil hat brigade might suggest that this is deliberate to ensure that only what's served by cloudflare gets to be fast...

Why is healthy speculation now considered "tin foil hat brigade"?

Conspiracy theory isn't a dirty word.

Speculation keeps people informed and alive.

I think everyone's concern should be that Cloudflare or Google might just replace each IP in their 24h logs with a random UUID and call it anonymized. Both companies can potentially store enough information to correlate DNS requests with regular traffic logs.

It doesn't take much guessing to know who sent an anonymous DNS request for example.com to one of your countless PoPs if your CDN logs a HTTP GET request to www.example.com at the same location a few milliseconds later.
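A minimal sketch of that linkage risk, with invented log shapes (the field layouts and the 50 ms window are assumptions for illustration, not anything Cloudflare has described):

```python
# Toy illustration: a "pseudonymized" DNS log can be joined back to an
# HTTP log purely by hostname and timing, with no shared identifier.
def correlate(dns_log, http_log, window_ms=50):
    """dns_log: (ts_ms, pseudonym, query_name) tuples.
    http_log: (ts_ms, client_ip, host) tuples from the same PoP.
    Returns (pseudonym, client_ip) pairs whose events line up."""
    pairs = set()
    for dns_ts, pseudonym, qname in dns_log:
        for http_ts, client_ip, host in http_log:
            # An HTTP request for the same host, milliseconds after the
            # DNS answer, very likely came from the same client.
            if host == qname and 0 <= http_ts - dns_ts <= window_ms:
                pairs.add((pseudonym, client_ip))
    return pairs

dns_log = [(1000, "uuid-42", "www.example.com")]
http_log = [(1012, "203.0.113.7", "www.example.com")]
print(correlate(dns_log, http_log))  # links uuid-42 to 203.0.113.7
```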

We're not storing the source IP addresses in any form. Not raw, not "transformed", not anonymized, not hashed. They are not being stored.

Our business is not about tracking people; it's about selling our service to businesses to make their web sites/APIs/applications faster and more secure.

From https://developers.cloudflare.com/

"Specifically, APNIC will be permitted to access query names, query types, resolver location and other metadata via a Cloudflare API, that will allow APNIC to study topics like the volume of DDoS attacks launched on the Internet and adoption of IPv6."

I interpret "query names" as values obtained from DNS queries hitting 1.1.1.1, e.g. "foo.example.com".

Is your answer to "What data do you provide to APNIC?" complete in the statement above?

Thanks for clarifying.

No querying IPs, ever. Nothing personally identifiable. APNIC can query things like: how many DNS queries come from the UK? How many query for google.com?


Are the gigabits of junk billions of tiny requests, or are there large requests as well?

Are you finding it more difficult than expected to manage the data?

I've been a customer since you launched, thanks a lot for it.

Nope. We have a lot of excess capacity. Doesn’t increase our costs. But, that’s a longer conversation…

Neteng here: without seeing the traffic charts for individual interfaces and aggregations, I would bet cloudflare has a shitload of excess inbound capacity. They are a content pushing CDN. I would bet that at a major IX where they have one 100Gbps port that their out:in ratio is 90:10 or greater. So if they only have 6-9 Gbps of traffic inbound on a 100GbE port to a fabric consisting of 80+ bgp peers, they have a lot of extra capacity to absorb unwanted inbound traffic before it becomes an operational concern.
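As a back-of-the-envelope check on those numbers (a sketch using the illustrative 90:10 ratio from this comment, not real Cloudflare figures):

```python
PORT_GBPS = 100      # one 100GbE IX port
OUT_TO_IN = 90 / 10  # assumed out:in traffic ratio for a content-heavy CDN

def inbound_headroom(outbound_gbps):
    """Spare inbound Gbps left on the port if normal inbound traffic
    tracks outbound at the assumed ratio."""
    normal_inbound = outbound_gbps / OUT_TO_IN
    return PORT_GBPS - normal_inbound

# Even with the port pushing 60 Gbps outbound, normal inbound is only
# ~6.7 Gbps, so tens of Gbps of junk can be absorbed before it matters.
print(round(inbound_headroom(60), 1))
```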

Have seen edge traffic charts for major porn hosting companies and the out:in traffic ratio is like 97:3

How do you know where the requests came from without IP addresses? Do you log ASN or something?

We keep IPs in memory to help stop abuse and debug other issues, but we promise to purge all logs within 24 hours or less.

>they can’t query anything on a specific user.

What exactly do you mean by "user"? Can they query DNS traffic by IP address / subnet? Exactly what are all of the restrictions there?

EDIT: Is there a whitelist of things they can query by or do you simply trust them to be good citizens, have a binding legal agreement, all of the above?

No. We have a legally binding agreement. And, more importantly, we don’t store or give them access to IPs or anything else that may be associated with any individual. Look at a DNS query, look at what could be identifying — let us know where concerns are. My hunch is we’ve thought of it. If not, we will fix it. We don’t want personally identifiable info. It creates a legal risk for us. We purge it as quickly as we can.

> We don’t want personally identifiable info. It creates a legal risk for us. We purge it as quickly as we can.

Ding ding ding, we have a winner. If more people would realize this, we would have fewer data breaches. To get there, a data breach must become more costly for companies.

Is there personal data in the domain names queried, as opposed to the IP addresses querying? I think there probably is. For example, consider the bug in iTerm2 that ran a DNS query for any text cmd-clicked on.

To protect against that, could you commit not to log to disk any queries that come from fewer than N IPs in 24 hours, and not to expose rare queries to APNIC or internally? (Not a demand, just brainstorming mitigation.)

It might also be fun to give an internal team access to the data you consider safe, and challenge them to dig up personal data from it.
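The threshold idea above could be sketched like this (N and the aggregation shape are assumptions; this is brainstorming, not Cloudflare's actual pipeline):

```python
from collections import defaultdict

N = 10  # assumed minimum distinct client IPs before a name counts as "common"

def loggable_names(events, n=N):
    """events: (query_name, client_ip) pairs seen in a 24h window.
    Keeps only names queried by at least n distinct IPs; rare names,
    which may embed personal data (pasted text, tokens), are dropped
    before anything is written to disk or exposed to APNIC."""
    ips_per_name = defaultdict(set)
    for name, ip in events:
        ips_per_name[name].add(ip)
    return {name for name, ips in ips_per_name.items() if len(ips) >= n}
```

A query like the iTerm2 one above would appear from only a handful of IPs and never survive the threshold.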

Do you plan to publish usage statistics over the coming weeks? I'd be keen to know what kind of (valid DNS) volumes you've seen since the announcement. And if there are any obvious patterns of ASNs or countries using it a lot (unless you don't want to disclose that to avoid being blocked there).

> I’ll start: do we ever store users’ IPs? No. They’re never written to disk.

Is there a guarantee that this will always be the case? Might there, in theory, be a point in the future where users' IPs are collected and stored?

Loving 1.1.1.1, btw.

Does APNIC get a sample of the raw packets of non-DNS and non-HTTP/HTTPS/QUIC protocols, so they can figure out what they actually are and why they're being sent towards that network?

I expect that no human from APNIC or Cloudflare will ever look at raw data from 1.1.1.1, nor will any of it be recorded or used in aggregated data that retains any personal information.

Personal information includes full IP addresses, the content of requests, or any set of data that can be used to recover these.

That is what "we respect your privacy" means.

Neither we nor APNIC can query “what” or “how many” requests from any IP have been made.

We can query things like:

1. How much query traffic is from Africa?

2. What’s the peak time of query traffic?

3. What are the most popular DNS authoritative servers?

If you have specific concerns, please raise them here.

What's in it for you guys? How do you make money off of 1.1.1.1? Thanks!

Brand. How much would you pay if you were us to associate your brand with privacy/security and speed?

Performance. Our core business is making our customers fast and safe. More people using 1.1.1.1 makes our Authoritative DNS service inherently faster for anyone who uses it.

Recruiting. Our mission is to help build a better Internet. There are lots of places the people on our team could work. That they work for us is often because they believe in our mission. 1.1.1.1 helps with that.

> Our mission is to help build a better Internet.

I've been working a lot with open data and I have a huge problem with Cloudflare ruining the open internet with bot protection and such. The issue I have is that public data is public, whether accessed by bot or human.

I have a real issue with you guys saying your mission is to better the internet when you break a shitton of FLOSS apps that are essentially harmless, while the people who actually want to do harm, crawling at huge rates for commercial purposes, break your systems like they're made of twigs. I'm saying this as a person who works on both sides and can't help but call you out.

So sorry, but unless you turn into a non-profit I'm really not buying your "helping the internet" song.

I agree. Part of what makes the web the web is that anyone has to be able to crawl its data, and build new technology on top.

As long as Cloudflare blocks that, it's hostile.

Maybe a way to register as a spider with Cloudflare and get a token that one passes in a header, with strict rate limits, would be much better than the current solution of just showing captchas.

I guess they don't; they just make a better internet, which helps the customers of their other services, and the rest of us. Seems like some good old-fashioned altruism to me.

If your ISP doesn't support IPv6, just try sending RA packets upstream and see what happens. If they're doing it wrong, using a blacklist instead of a whitelist, then it might well leak. It's good to note that this doesn't affect IPv4 networking in any way.

I'm not a networking guy, but I'd like to try this. Can you explain how you would do it? (which tools, or a link to some docs would be nice)

Using radvd [1] is the easiest way on Linux. Or you can get it done using ICS on Windows. Personally I used a burner laptop with a live distribution and ran radvd. Or if you like details, you can use Python Scapy on Linux to craft and send RA packets. [1] http://www.litech.org/radvd/
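For reference, a minimal radvd.conf sketch (the interface name is a placeholder and the prefix shown is the IPv6 documentation range; substitute your own):

```conf
# /etc/radvd.conf -- minimal sketch. DO NOT run this on a network you
# don't control: advertising a bogus prefix breaks IPv6 for other hosts.
interface eth0
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64   # documentation prefix; substitute your own
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```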

Thanks. I'm on Linux, so I'll check out radvd.

> "Some folk, without any material to justify it, started configuring 1.1.1.1. Now, I can start using your IP address, I suppose, but we're both going to have a problem," Huston told ZDNet, laughing.

Ha, I was using 1.1.1.1 on my local intranet as an experiment with dnsmasq a few years back. I got scolded for it, rightly so, but I figured since no-one was using 1.1.1.1 at the time it was OK.

I see I am not the only one who did so.

Who scolded you for it? Wouldn't that just mis-route legitimate traffic to 1.1.1.1 locally instead of to the internet?... but you probably had no legitimate traffic there anyway, right?

Who? I don't remember. Someone on HN/reddit. It was a mild scolding in the same vein as in the article.

And yes, that's what it would do. And no, I didn't. Made it easier to type in IPs, that's for sure.

Oh, okay... I thought it was your ISP or something. That is why I was wondering how anyone would even notice.

Get a better ISP.

Using 1.1.1.1 for a DNS service was a bad idea. Now all this garbage traffic is being routed. (Before, it would just be dropped closer to the edge.)

That was part of the reason they used it. They partnered with APNIC, which held the address, wanted to study the junk, but couldn't handle the traffic. Cloudflare got 1.1.1.1 for DNS use and helped handle the junk traffic so it could be studied.

Awesome, the world's biggest honeypot? There is literally a finite amount of bandwidth in existence, so let Cloudflare have as much cruft as it wants.

Ignoring the "finite" point: years ago, I was introduced to a company that basically didn't use a whole routed, public /8 for anything except... honeypot research.

It was a blast to see some of it, and the war stories about how they were able to give early warnings of nasties that were about to wreak havoc.

It gave them a good incentive to design a pretty good tooling set, which never made it past "On Demand Innovation Services", but which helped me clean out a network with a zoo of malware strains. I'd still implement it in the different networks I frequent, if it were available.

It seems you grasped my poorly worded comment. I think Cloudflare knows what they are doing, and I am happy they are drawing a lot of the focus towards themselves; this is good for everyone. And I hope they learn a lot about dealing with nefarious traffic.

I have no idea where you get the idea there is a finite amount of bandwidth available. It is not coal or molybdenum. ISPs are continually being expanded.

Just because it is expanding, doesn't mean it isn't finite.

The current bandwidth is finite. The future bandwidth is finite. Even if we use all the resources available to us, expanding at the speed of light to capture those resources, it's still finite.

From a pure physics perspective, yes. But do take the time to familiarize yourself with the many THz of bandwidth available in one singlemode strand, and how many coherent-modulated 400GbE links can fit in a typical DWDM bandplan.

The internet is continually expanding at OSI layer 1. It is a construction project. The bandwidth is growing faster than our ability to fill it.

IPv6 address space is also finite, but a handful of /64s is a paltry slice of it all.

Well my comment got completely misunderstood.

There are only so many criminals in the world, period. They only have so much bandwidth, either through stealing it or buying it.

Also, I am happy Cloudflare is doing this, they are, at the very least, taking resources away from the criminals that could be attacking others and doing real harm.

From a marketing point of view, I think it was a brilliant move by Cloudflare to get the 1.1.1.1 address. Clearly better than 8.8.8.8!

But from a user perspective, why couldn't they have just let that address be... So many things are going to break just because Cloudflare wants a pretty IP. Sure, the things that break were using a hack, but in my opinion that doesn't automatically make it okay to break them.

Now I'm just waiting for a startup to launch a Stack Overflow competitor on example.com...

example.com (and other example.*) is reserved for documentation purposes, i.e. you can't buy it.

Like the blocks 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2) and 203.0.113.0/24 (TEST-NET-3) from rfc5737? Or is it too new, maybe from rfc2119 is better? There are a few test-nets for documentation, just like example.com.

Just like 1.1.1.1 used to be null routed?

Not exactly; example.com and .org are reserved in RFC 2606[1]. 1.1.1.1 is not listed in the special-use RFC[2], it's just an address previously unused by APNIC.

[1]: https://tools.ietf.org/html/rfc2606

[2]: https://tools.ietf.org/html/rfc5735

8.8.8.8 goes pretty well in the Chinese market. (8 being a popular number.) I think 1.1.1.1 is not such a hit.

Eh, I don't think it goes as well as you think, considering it's been banned for years [0].

[0]: https://www.reddit.com/r/sysadmin/comments/2komqe/china_bloc...

True for access from mainland China, though the scope of Chinese culture is a bit larger.

And just knowing that Google has that number is a bit like them owning a license plate or phone number with pure eights...


fa fa fa fa fa far better

What breaks with 1.1.1.1? Some people can't use it because of misconfiguration, but it shouldn't break any services.

Many things break because they were abusing things and using a "hack". They deserve to break!

They also provide 1.0.0.1; it's easy enough to set that up as your client's backup.
