Where are you testing from? I'm going to guess: a datacenter. Residential customers won't see anything this fast. I'm in a small town in Kansas, connected by 1 Gbit ATT fiber. I'm getting ~26ms to 1.1.1.1 and ~19ms to my private DNS resolver that I host in a datacenter in Dallas. Google DNS comes in around 19ms.
I suspect that Cloudflare and Google DNS both have POPs in Dallas, which accounts for the similar numbers to my private resolver. My point is, low latencies to datacenter-located resolver clients is great but the advantage is reduced when consumer internet users have to go across their ISP's long private fiber hauls to get to a POP. Once you're at the exchange point, it doesn't really matter which provider you choose. Go with the one with the least censorship, best security, and most privacy. For me, that's the one I run myself.
Side note: I wish AT&T was better about peering outside of their major transit POPs and better about building smaller POPs in regional hubs. For me, that would be Kansas City. Tons of big ISPs and content providers peer in KC but AT&T skips them all and appears to backhaul all Kansas traffic to DFW before doing any peering.
Cloudflare:
64 bytes from 1.1.1.1: icmp_seq=0 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=128 time=9 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=128 time=2 ms
Google:
64 bytes from 8.8.8.8: icmp_seq=0 ttl=54 time=12 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=11 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=13 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=45 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=54 time=14 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=54 time=11 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=54 time=34 ms
Quad9:
64 bytes from 9.9.9.9: icmp_seq=0 ttl=53 time=10 ms
64 bytes from 9.9.9.9: icmp_seq=1 ttl=53 time=69 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=53 time=14 ms
64 bytes from 9.9.9.9: icmp_seq=3 ttl=53 time=58 ms
64 bytes from 9.9.9.9: icmp_seq=4 ttl=53 time=52 ms
One thing I noticed is that when I first pinged 1.1.1.1 I got 14ms, which then quickly dropped to ~3ms consistently:
64 bytes from 1.1.1.1: icmp_seq=0 ttl=128 time=14 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=128 time=14 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=128 time=3 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=128 time=1 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=128 time=4 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=52 time=241.529 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=52 time=318.034 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=52 time=337.291 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=52 time=255.748 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=52 time=247.765 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=52 time=235.611 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=52 time=239.427 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=52 time=247.911 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=52 time=260.911 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=52 time=281.153 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=52 time=300.363 ms
64 bytes from 1.1.1.1: icmp_seq=11 ttl=52 time=234.296 ms
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
Request timeout for icmp_seq 7
Request timeout for icmp_seq 8
Request timeout for icmp_seq 9
Request timeout for icmp_seq 10
$ ping 1.0.0.1
PING 1.0.0.1 (1.0.0.1): 56 data bytes
64 bytes from 1.0.0.1: icmp_seq=0 ttl=50 time=167.359 ms
64 bytes from 1.0.0.1: icmp_seq=1 ttl=50 time=165.791 ms
64 bytes from 1.0.0.1: icmp_seq=2 ttl=50 time=165.846 ms
64 bytes from 1.0.0.1: icmp_seq=3 ttl=50 time=166.755 ms
64 bytes from 1.0.0.1: icmp_seq=4 ttl=50 time=166.694 ms
64 bytes from 1.0.0.1: icmp_seq=5 ttl=50 time=166.088 ms
64 bytes from 1.0.0.1: icmp_seq=6 ttl=50 time=166.460 ms
64 bytes from 1.0.0.1: icmp_seq=7 ttl=50 time=166.668 ms
64 bytes from 1.0.0.1: icmp_seq=8 ttl=50 time=166.753 ms
64 bytes from 1.0.0.1: icmp_seq=9 ttl=50 time=165.670 ms
64 bytes from 1.0.0.1: icmp_seq=10 ttl=50 time=166.816 ms
64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=17.580 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=18.025 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=17.780 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=18.231 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=17.906 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=18.447 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=59 time=22.806 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=23.321 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=24.379 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=25.869 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=24.485 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=24.165 ms
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=57 time=23.005 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=22.867 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=24.461 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=23.680 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=35.581 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=21.033 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=57 time=41.634 ms
ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=1.36 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=1.32 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=1.34 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=1.38 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=1.37 ms
ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=1.33 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=1.38 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=1.35 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=1.36 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=1.35 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=5.044 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=6.447 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=6.371 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=6.308 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=60 time=7.317 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=60 time=5.989 ms
Dubai:
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=48.728 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=48.450 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=47.266 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=45.320 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=46.470 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=55 time=14.053 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=12.715 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=55 time=13.615 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=55 time=14.018 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=55 time=12.261 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=55 time=11.428 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=55 time=11.950 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=55 time=13.034 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=55 time=13.679 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=55 time=12.415 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=55 time=12.088 ms
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 89.228.6.1: Destination net unreachable.
Reply from 89.228.6.1: Destination net unreachable.
Reply from 89.228.6.1: Destination net unreachable.
Reply from 89.228.6.1: Destination net unreachable.
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=61 time=15.860 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=61 time=15.799 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=61 time=15.616 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=61 time=15.769 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=61 time=15.431 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=61 time=16.459 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=61 time=15.860 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=61 time=15.930 ms
Tokyo, domestic 2 Gbps fiber, but connected over Wi-Fi:
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=5.531 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=4.420 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=5.450 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=5.438 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=4.231 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=5.933 ms
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=57 time=6.440 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=4.574 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=4.684 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=4.992 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=5.942 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=5.955 ms
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=58 time=111.781 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=102.982 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=102.206 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=110.135 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=110.085 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=58 time=6.886 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=5.475 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=58 time=5.674 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=58 time=5.557 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=58 time=7.066 ms
$ ping 9.9.9.9
PING 9.9.9.9 (9.9.9.9): 56 data bytes
64 bytes from 9.9.9.9: icmp_seq=0 ttl=58 time=5.880 ms
64 bytes from 9.9.9.9: icmp_seq=1 ttl=58 time=5.534 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=58 time=5.251 ms
64 bytes from 9.9.9.9: icmp_seq=3 ttl=58 time=5.194 ms
64 bytes from 9.9.9.9: icmp_seq=4 ttl=58 time=5.698 ms
Something interesting I saw pointed out in the Reddit thread about this is that the TTL is way different between 1.1.1.1 and 8.8.8.8.
Your pings show the same thing: 128 vs 53. I tried on my laptop and get something similar. A traceroute to 1.1.1.1 shows 1 hop, which is wrong; 1.0.0.1 shows a few hops.
Unless you tell it not to, ping will try a reverse lookup on the IP you are pinging in order to display that to you in the output. It's a good idea to keep that in mind when you ping something, especially if you notice the first ping is abnormally slow.
Perhaps that depends on operating system. In the 30 years I have been using ping on Linux, the reverse lookup time is absolutely included in the first ping time.
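If you want to take the reverse lookup out of the picture entirely, most ping implementations (Linux iputils and BSD/macOS alike) accept -n to disable it. A quick sanity check, assuming those standard flags:
$ ping -n -c 5 1.1.1.1   # -n: numeric output only, no reverse lookups
$ ping -c 5 1.1.1.1      # for comparison, with lookups enabled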
I think AT&T's fiber modems are using 1.1.1.1. I'm getting < 1ms ping times and according to Cloudflare's website there's no data center close enough to me for that to be possible without violating the speed of light.
What happens if you go to https://1.1.1.1 in a browser? It should have a valid TLS cert and a big banner that says, among other things, "Introducing 1.1.1.1". If your ISP's CPE or anything else is fucking with traffic to that IP, it won't load/display that.
Call your ISP and ask them why they're blocking access to some websites. Ask them if there are any other websites they're blocking. Tweet about it. Etc
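A rough command-line equivalent of that browser check, for anyone who prefers curl (the grep just pulls the certificate subject and issuer out of the verbose TLS handshake output):
$ curl -sv https://1.1.1.1/ -o /dev/null 2>&1 | grep -E 'subject:|issuer:'
If something on the path is intercepting 1.1.1.1, the handshake should fail or show a certificate that plainly isn't Cloudflare's.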
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=10.8 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=11.3 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=10.7 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=10.9 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=11.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=60 time=11.1 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=60 time=10.5 ms
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=7.65 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=8.53 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=10.2 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=8.04 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=7.92 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=59 time=7.85 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=59 time=7.88 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=59 time=7.73 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=59 time=7.73 ms
BigPipe, Spark, Skinny and Vodafone don't believe in peering and thus don't peer with Cloudflare at APE. If you wanted the best performance then 2degrees, Orcon, Voyager or Slingshot are the best for this since they peer.
iMac ~ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=64 time=0.688 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.814 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=1.153 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=0.752 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=64 time=0.755 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=64 time=0.789 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=64 time=0.876 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=64 time=0.869 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=64 time=0.830 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=64 time=1.387 ms
--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.688/0.891/1.387/0.204 ms
Pinging 8.8.8.8 averages 8ms. CloudFlare must have a POP here in Nashville?
That's probably because AT&T is using 1.1.1.1 for something internal and breaking the public internet for its users: you get a really fast ping to 1.1.1.1, but it's not the 1.1.1.1 you are trying to reach.
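One way to test that theory, as far as I know (this is an assumption about Cloudflare's setup, so verify it before relying on it): their resolver answers CHAOS-class id.server queries with an identifier for the POP that handled the query, which a modem quietly answering 1.1.1.1 locally almost certainly won't do:
$ dig +short CH TXT id.server @1.1.1.1   # expect some POP identifier if you're really reaching Cloudflare
Comparing traceroute -n 1.1.1.1 against traceroute -n 1.0.0.1 is another quick tell: one hop to the first and several to the second strongly suggests the CPE is squatting on the address.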
That's impressive. My AT&T wifi router caps bandwidth at 300 Mbit/s over wifi (versus 1 Gbit/s on Ethernet) and adds 10-20 ms of latency. And that's while standing next to it on 5 GHz.
I'm guessing Google's resolvers are a little busier than Cloudflare's right now, because at the moment pretty much nobody outside of HN is hitting Cloudflare's. It will be a more interesting comparison in 6 months.
I'd be surprised if increased load has a negative effect on 1.1.1.1's performance.
We run a homogeneous architecture -- that is, every machine in our fleet is capable of handling every type of request. The same machines that currently handle 10% of all HTTP requests on the internet, and handle authoritative DNS for our customers, and serve the DNS F root server, are now handling recursive DNS at 1.1.1.1. These machines are not sitting idle. Moreover, this means that all of these services are drawing from the same pool of resources, which is, obviously, enormous. This service will scale easily to any plausible level of demand.
In fact, in this kind of architecture, a little-used service is actually likely to be penalized in terms of performance because it's spread so thin that it loses cache efficiency (for all kinds of caches -- CPU cache, DNS cache, etc.). More load should actually make it faster, as long as there is capacity, and there is a lot of capacity.
Meanwhile, Cloudflare is rapidly adding new locations -- 31 new locations in March alone, bringing the current total to 151. This not only adds capacity for running the service, but reduces the distance to the closest service location.
In the past I worked at Google. I don't know specifically how their DNS resolver works, but my guess is that it is backed by a small set of dedicated containers scheduled via Borg, since that's how Google does things. To be fair, they have way too many services to run them all on every machine. That said, they're pretty good at scheduling more instances as needed to cover load, so they should be fine too.
In all likelihood, what really makes the difference is the design of the storage layer. But I don't know the storage layer details for either Google's or Cloudflare's resolvers so I won't speculate on that.
> In fact, in this kind of architecture, a little-used service is actually likely to be penalized in terms of performance because it's spread so thin that it loses cache efficiency
This is exactly what I'm seeing with the small amount of testing I'm doing against google to compare vs cloudflare.
Sometimes Google will respond in 30ms (a cache hit), but more often than not it has to do at least a partial lookup (160ms), and sometimes it goes even further out (400ms).
The worst I'm encountering on 1.1.1.1 is around 200ms for a cache miss.
Basically, what it looks like is that Google is load-balancing my queries across caches and I'm getting poor performance because of it - I'm guessing they'd simply need to kill some of that capacity to see increased cache hit rates.
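If anyone wants to reproduce this kind of comparison, dig's reported "Query time" is an easy proxy; the hostname below is just a placeholder for whatever obscure, uncached name you pick:
$ dig @8.8.8.8 rarely-queried-name.example.com | grep 'Query time'   # first query: likely a cache miss
$ dig @8.8.8.8 rarely-queried-name.example.com | grep 'Query time'   # repeat: a hit only if you land on the same cache
$ dig @1.1.1.1 rarely-queried-name.example.com | grep 'Query time'   # same experiment against Cloudflare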
Anecdotally I'm at least seeing better performance out of 1.1.1.1 than my ISP's (internode) which has consistently done better than 8.8.8.8 in the past.
Also anecdotally, my short 1-2 month trial of using systemd-resolved is now coming to a failed conclusion, I suspect I'll be going back to my pdnsd setup because it just works better.
ICMP round-trip times don't necessarily prove anything - you need to be examining DNS resolution times.
Lots of network hardware (e.g., routers, and firewalls if they're not outright blocking it) de-prioritises ICMP (and other types of network control/testing traffic), and the likelihood is that Google (and other free DNS providers) are throttling the number of ICMP replies that they send.
They're not providing an ICMP reply service, they're providing a DNS service. I had a situation during the week where I had to tell one of our engineers to stop tracking 8.8.8.8 as an indicator of network availability for exactly this reason.
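In other words, measure the thing you actually care about. A minimal sketch, assuming dig is installed (its "Query time" covers the DNS work itself, which ICMP never exercises):
$ ping -c 3 8.8.8.8                                       # ICMP: may be de-prioritised or rate-limited
$ dig @8.8.8.8 news.ycombinator.com | grep 'Query time'   # DNS: what clients actually experience
$ dig @1.1.1.1 news.ycombinator.com | grep 'Query time'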
Note: from Google Compute Engine, use 8.8.8.8, as it should always be faster. I'm guessing the 8.8.8.8 service exists in every Google Cloud region. Even better, use the default GCE auto-generated DNS IP that they configure in /etc/resolv.conf to get instance-name-resolving magic.
Usually best to use 169.254.169.254, which is the magic "cloud metadata address" that talks directly to the local hypervisor (I think?). That will recurse to public DNS as necessary. https://cloud.google.com/compute/docs/internal-dns
I agree that's usually best, but one exception is worth noting: if you want only publicly resolvable results, don't use 169.254.169.254. That address adds convenient predictable hostnames for your project's instances under the .internal TLD.
Also, no need to hardcode that address - DHCP will happily serve it up. It also has the hostname metadata.google.internal and the (disfavored for security reasons) bare short hostname metadata.
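For reference, this is roughly what the stock setup looks like on a GCE VM (project and instance names here are placeholders, and the exact search domains vary by setup):
$ cat /etc/resolv.conf
nameserver 169.254.169.254
search c.my-project.internal. google.internal.
$ dig +short my-vm.c.my-project.internal @169.254.169.254   # internal instance names resolve via the metadata resolver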
The "backup" IPv4 address is 1.0.0.1 rather than, say, 1.1.1.2, and why they needed APNIC's help to make this work
In theory you can tell other network providers "Hi, we want you to route this single special address 1.1.1.1 to us" and that would work. But in practice most of them have a rule which says "The smallest routes we care about are a /24" and 1.1.1.1 on its own is a /32. So what gets done about that is you need to route the entire /24 to make this work, and although you can put other services in that /24 if you _really_ want, they will all get routed together, including failover routing and other practices. So, it's usually best to "waste" an entire /24 on a single anycast service. Anycast is not exactly a cheap homebrew thing, so a /24 isn't _that_ much to use up.
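You can see the whole block registered as a single unit in whois (output trimmed and illustrative; the exact fields depend on the registry that answers):
$ whois 1.1.1.1 | grep -iE '^(inetnum|route)'
inetnum:        1.1.1.0 - 1.1.1.255
route:          1.1.1.0/24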
I'm in a city in southern Japan (so most of my traffic needs to go to Tokyo first), on a gigabit fiber connection.
--- 1.1.1.1 ping statistics ---
rtt min/avg/max/mdev = 30.507/32.155/36.020/1.419 ms
--- 8.8.8.8 ping statistics ---
rtt min/avg/max/mdev = 19.618/21.572/23.009/0.991 ms
The traceroutes are inconclusive but they kind of look like Google has a POP in Fukuoka and CloudFlare are only in Tokyo.
edit: Namebench was broken for me, but running GRC's DNS Benchmark my ISP's own resolver is the fastest, then comes Google 8.8.8.8, then Level3 4.2.2.[123], then OpenDNS, then NTT, and then finally 1.1.1.1.
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Ping statistics for 1.1.1.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 45ms, Maximum = 45ms, Average = 45ms
Pinging 1.0.0.1 with 32 bytes of data:
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Ping statistics for 1.0.0.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 46ms, Maximum = 46ms, Average = 46ms
Pinging 8.8.4.4 with 32 bytes of data:
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Ping statistics for 8.8.4.4:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 29ms, Maximum = 29ms, Average = 29ms
Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Ping statistics for 8.8.8.8:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 21ms, Maximum = 21ms, Average = 21ms
Pinging 208.67.220.220 with 32 bytes of data:
Reply from 208.67.220.220: bytes=32 time=45ms TTL=54
Reply from 208.67.220.220: bytes=32 time=46ms TTL=54
Reply from 208.67.220.220: bytes=32 time=45ms TTL=54
Reply from 208.67.220.220: bytes=32 time=50ms TTL=54
Ping statistics for 208.67.220.220:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 45ms, Maximum = 50ms, Average = 46ms
Pinging 208.67.222.222 with 32 bytes of data:
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Ping statistics for 208.67.222.222:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 61ms, Maximum = 61ms, Average = 61ms
I would be interested to hear from Google (8.8.8.8) how much ping traffic that address gets ...
I know that I will quickly ping 8.8.8.8 as a very quick and dirty test of whether the network is up ... it's just faster to type than any other address I could test with.
It looks like you are testing either from data centers where Cloudflare has servers or from ones it exchanges traffic with, which is likely in a data center given the traffic Cloudflare transports. What most users want is the ping time from home or the office.
DNS-over-HTTPS doesn't make as much sense to me as DNS-over-TLS. They are effectively the same thing, but HTTPS has the added overhead of the HTTP headers per request. If you look at the in-progress draft RFC, https://tools.ietf.org/html/draft-ietf-doh-dns-over-https-04, this is quite literally the only difference: the DNS request is encoded as a standard serialized DNS packet.
The article mentions QUIC as something that might make HTTPS faster than standard TLS. I guess over time DNS servers could start encoding DNS-over-HTTPS responses as JSON, like Google's implementation, though there is no spec that I've seen yet that actually defines that format.
Can someone explain what the excitement around DNS-over-HTTPS is all about, and why DNS-over-TLS isn’t enough?
EDIT: I should mention that I started implementing this in trust-dns, but after reading the spec I became less enthusiastic about it and more interested in finalizing my DNS-over-TLS support in the trust-dns-resolver. The client and server already support TLS; I couldn't bring myself to raise the priority enough to actually complete the HTTPS impl (granted, it's not a lot of work, but still, the tests etc. take time).
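For anyone who wants to poke at both against 1.1.1.1, here's a quick sketch assuming knot's kdig and curl are installed. Note that the JSON endpoint is Cloudflare's own convenience API, not part of the DoH draft (which carries wire-format DNS in the HTTP body):
$ kdig @1.1.1.1 +tls +tls-hostname=cloudflare-dns.com example.com A   # DNS-over-TLS on port 853
$ curl -s -H 'accept: application/dns-json' \
    'https://cloudflare-dns.com/dns-query?name=example.com&type=A'    # DNS-over-HTTPS, JSON flavor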
Some ISPs block outbound DNS from customers to anywhere but their resolvers, filtering based on target port.
This is a particularly common trick in countries that attempt to censor the internet.
It's a lot harder to do that with DNS-over-HTTPS because it looks like normal traffic.
That said, in this case ISPs can just null route the IP address of the obvious main resolvers such as 1.1.1.1. I imagine most of the benefit is surely to people who can spin up their own resolvers.
When we add TLS on top of the protocol, ISPs can only filter based on port at that point. We can run DNS on 443 if that helps, but as you said, static well-known IPs can then be blocked.
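A crude check for that kind of interception: 198.51.100.1 is a reserved documentation address that should never answer DNS, so getting an answer at all means something on the path is rewriting your port-53 traffic (a heuristic, not proof):
$ dig @198.51.100.1 example.com +time=3 +tries=1   # should simply time out on a clean path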
> I imagine most of the benefit is surely to people who can spin up their own resolvers.
There are already many easily run DNS resolvers available. Is there a benefit you see in operating them over HTTPS that improves on that?
This is really the elephant in the room. For all we know, ISP bad-actors have never cared about DNS for data-collection purposes, and they're already using SNI to gather data to sell to marketers. I think it's absolutely crucial to find a way to (at least optionally) send SNI encrypted to the server.
The client that sends SNI is, AFAIK, the browser or a similar piece of software. Some older browsers don't support SNI so they can only access single-vhost-per-ip over https.
This means you'll have a really hard time trying to get rid of SNI system-wide, what with a lot of minor apps making their own https connections (granted, on Android or iOS they probably use a common API, but not on a computer).
Well, with SNI the concern isn't DNS. Any TLS connection that supports SNI (basically everything that isn't ancient) would have to be fixed. Also, SNI is a pretty useful thing to have, and getting rid of it doesn't exactly fix much. Without SNI the server only has the destination IP address to determine which site, and thus which certificate, to send to the client. Having HTTPS sites with multiple certificates hosted on one IP address only works because of SNI, so you would break a large portion of the web by disabling it.
Also, even if you do disable SNI, the server still sends back the certificate with the domain names in it. And even if you ignore all of that, there's still reverse DNS, which will probably be accurate if they send mail from that server, and you can always do a DNS lookup for every domain name there is to get a map of which domains point to a given IP. Due to DNS-based geolocation that won't work for every site, but the sites using that are going to be big enough to find their IP address ranges via another method.
In short, there's really no good solution here, but an amendment to TLS could conceivably make it impossible to narrow down which of the sites hosted at an IP address the user was visiting. That could actually be good enough for traffic to e.g. Cloudflare.
On non-rooted Android, you have to set a static IP for every network, and then there will be an option to enter DNS servers. They default to Google DNS.
The static IP settings are under Advanced.
I suppose there is also domain fronting [1], but it won't be fast or an easy-to-remember IP address anymore. And if you need that, you might need a VPN anyway?
Yes! And I plan to actually build in a default setup to use that now that it exists. I should have mentioned up front that this is the most exciting thing to me in the announcement.
This is a very exciting development, thank you for posting this.
I've implemented DNS before. Doing this saves an entire 300 lines of code. At the same time, it makes the DNS server much more complicated. On top of that, implementing a compliant POSIX libc will now either use a completely different code path, or pull in a huge amount of code to implement HTTP, HTTP/2, and QUIC. If the simpler, cleaner, and more performant route is taken, it will break when someone screws up "legacy" DNS without noticing, because it works in the browser.
It's not worth the complexity of multiple protocols that do the same thing. And it's not worth making the base system insanely complicated just so that the magic four letters 'http' can show up.
TLS? Yeah, since the simpler secure DNS efforts failed, we might as well use that. But let's try to keep HTTP complexity contained.
It seems like crappy networks are the norm nowadays, and the preference of the ISPs is to offer the web only. You need a middlebox just to access the internet at large (e.g., Tor). Masquerading traffic as web traffic appears to be a good tactic, though inefficient/sloppy.
Yeah, but once everything is tunneled over HTTP it will finally fix the network operator problem once and for all since you can't filter applications using ports.
There are a couple of different approaches. One is DNS-over-TLS. That takes the existing DNS protocol and adds transport layer encryption. Another is DNS-over-HTTPS. It includes security but also all the modern enhancements like supporting other transport layers (e.g., QUIC) and new technologies like HTTP/2 Server Push. Both DNS-over-TLS and DNS-over-HTTPS are open standards. And, at launch, we've ensured 1.1.1.1 supports both.
We think DNS-over-HTTPS is particularly promising — fast, easier to parse, and encrypted.
DNS over HTTPS would be harder for governments and other middlemen to block or intercept, despite it being less efficient. It would look like any other HTTPS request, especially if browsers agreed to universally support it.
TLS isn't magic: you can still observe the encrypted stream and make assumptions based on bytes sent/received on the wire, protocol patterns, and timing. See the CRIME and BREACH attacks.
Perhaps if the attacker has to filter traffic by protocol first it's harder, but not at all impossible. I'd guess that DNS-over-HTTPS packets won't be hard to identify by other means.
Thank you for responding, Patrick. As one of the authors of the RFC, your views on this are a great contribution to the conversation.
> rfc 8336
I'll have to read up on this, thanks for the link.
> h2 coalescing
DNS is already capable of using TCP/TLS (and, by its nature, UDP) for multiple DNS requests at a time. Is there some additional benefit we get here?
> h2 push
This one is interesting, but DNS already has optimizations built in for things like CNAME and SRV record lookups, where the IP is implicitly resolved when available and sent back with the original request. Is this adding something additional to those optimizations?
> caching
DNS has caching built-in, TTLs on each record. Is there something this is providing over that innate caching built into the protocol?
> it starts to add up to a very interesting story.
I'd love to read about that story, if someone has written something, do you have a link?
Also, a question that occurred to me, are we talking about the actual website you're connecting to being capable of preemptively passing DNS resolution to web clients over the same connection?
this story will evolve as the http ecosystem evolves - but that's part of the point.
wrt coalescing/origin/secondary-certificates, it's a powerful notion to consider your recursive resolver's ability to serve other http traffic on the same connection. That has implications for anti-censorship and traffic analysis.
Additionally the ability to push DNS information that it anticipates you will need outside the real time moment of an additional record has some interesting properties.
DoH right now is limited to the recursive resolver case. But it does lay the groundwork for other http servers being able to publish some DNS information - that's something that needs some deep security based thinking before it can be allowed, but this is a step towards being compatible with that design.
wrt caching - some apps might want a custom dns cache (as firefox does), but some may simply use an existing http cache for that purpose without having to invent a dns cache. leveraging code is good. There are lots of other little things like that which http brings for free - media type negotiation, proxying, authentication, etc..
> There are lots of other little things like that which http brings for free - media type negotiation, proxying, authentication, etc..
Reading a little between the lines here, would you say that at some point we effectively replace the existing DNS resolution graph with something implemented entirely over http? Where features like forwarding and proxying would have more common off the shelf tooling?
I can start see a picture here that looks to be more about common/shared code, and less about actual features of the underlying protocols.
As a complete layperson, h2 push might be interesting because a DNS resolver could learn to detect patterns in DNS queries (e.g. someone who requests twitter.com usually requests pbs.twimg.com and abs.twimg.com right after) and start to push those automatically when they get the query for twitter.com.
TIL you can also use 1.1 and it will expand to 1.0.0.1
$> ping 1.1
PING 1.1 (1.0.0.1) 56(84) bytes of data.
64 bytes from 1.0.0.1: icmp_seq=1 ttl=55 time=28.3 ms
64 bytes from 1.0.0.1: icmp_seq=2 ttl=55 time=33.0 ms
64 bytes from 1.0.0.1: icmp_seq=3 ttl=55 time=43.6 ms
64 bytes from 1.0.0.1: icmp_seq=4 ttl=55 time=41.7 ms
64 bytes from 1.0.0.1: icmp_seq=5 ttl=55 time=56.5 ms
64 bytes from 1.0.0.1: icmp_seq=6 ttl=55 time=38.4 ms
64 bytes from 1.0.0.1: icmp_seq=7 ttl=55 time=34.8 ms
64 bytes from 1.0.0.1: icmp_seq=8 ttl=55 time=45.7 ms
64 bytes from 1.0.0.1: icmp_seq=9 ttl=55 time=45.2 ms
64 bytes from 1.0.0.1: icmp_seq=10 ttl=55 time=43.1 ms
I don’t actually think it’s in a spec formally but is in a common c lib[0].
> a.b
> Part a specifies the first byte of the binary address. Part b is interpreted as a 24-bit value that defines the rightmost three bytes of the binary address. This notation is suitable for specifying (outmoded) Class A network addresses.
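The same parsing rules give a few other shorthands; whether they work depends on your ping and libc (these are fine with glibc's inet_aton on Linux):
$ ping -c 1 127.1          # a.b form: expands to 127.0.0.1
$ ping -c 1 2130706433     # a single 32-bit number: also 127.0.0.1
$ ping -c 1 1.1            # expands to 1.0.0.1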
It's not fully true that 127.0.0.1 is the same as 0.0.0.0. For example, binding a webserver to 0.0.0.0 makes it listen on the public network interfaces, while 127.0.0.1 is strictly localhost.
What I was trying to say is - On Linux, INADDR_ANY (0.0.0.0) supplied to connect() or sendto() calls is treated as a synonym for INADDR_LOOPBACK (127.0.0.1) address.
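A quick way to see that Linux-specific connect() behavior, assuming python3 and curl are available:
$ python3 -m http.server 8080 --bind 127.0.0.1 &   # listens on loopback only
$ curl -s -o /dev/null -w '%{http_code}\n' http://0.0.0.0:8080/
200   # the connect() to 0.0.0.0 reached the loopback listener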
It's one more letter than a suffix, but as a prefix it's a bit clearer. I've known companies to post LAN hostname addresses that way, and in written/printed materials it stands out pretty clearly as an address to type.
It follows the URL standards (no scheme implies the current or default scheme). Many auto-linking tools (such as Markdown or Word) recognize it by default (though the results are sometimes unpredictable given scheme assumptions). It's also increasingly the recommendation for HTML resources where you want to help ensure same-scheme requests (a good example: cross-server/CDN CSS and JS links are now typically written as //css-host.example.com/some/css/file.css).
I wish that they talked a bit more about their stance regarding censorship. They have a small paragraph talking about the problem, but they don't talk about the "solution".
While Cloudflare has been pretty neutral about censoring sites in the past (notably, pirate sites), the Daily Stormer incident put them in a tough spot[1].
They talk a bit about Project Galileo (the link is broken BTW, it should be https://www.cloudflare.com/galileo), but their examples do not mention topics that would be controversial in western societies, and the site is quite vague.
Would they also protect sites like sci-hub, for example?
While I would rather use a DNS not owned by Google, I have never seen any site blocked by them, including sites with a nation-wide block. I hope that Cloudflare is able to do the same thing.
There's a pretty big difference between terminating a business relationship (which is what Cloudflare did to Daily Stormer, and which Google also did a couple days before Cloudflare did) and refusing to answer DNS queries for third-party domains with which there is no business relationship. It's hard to imagine how the former could be used as precedent to compel the latter.
Cloudflare has no interest in censorship -- the whole reason the Daily Stormer thing was such a big deal was because it's the only time Cloudflare has ever terminated a customer for objectionable content. Be sure to read the blog post to understand: https://blog.cloudflare.com/why-we-terminated-daily-stormer/
(Disclosure: I work for Cloudflare but I'm not in a position to set policy.)
I probably should have made a clearer point instead of linking to TorrentFreak.
I did not mean that I was worried that CloudFlare's DNS would start blocking sites whose content they disagree with (although that would also be worrisome).
I'm worried that copyright holders might be able to use the Daily Stormer case as a precedent to force CloudFlare to stop offering services to infringing sites.
If they are able to do that, I can also see them attempting to force CloudFlare to remove DNS entries as well.
Right, as I said, it's hard for me to see how one could be used as precedent for the other given how different the situations are. And if you could use it, you could just as easily do the same against Google DNS.
Bear in mind, they dropped Daily Stormer because they were claiming Cloudflare agreed with their ideology. Which someone in the previous discussion pointed out was a Terms of Service violation.
DNS resolving offers no such terms and no such reason to make such a claim. I don't see that playing here. And bear in mind, when the CEO did it, he wrote about how dangerous it was that companies had that power. I don't feel other companies running other DNS services hold that level of concern or awareness.
When you consider that their "competitor" in the space of free DNS resolvers with easy-to-remember IPs is Google, who recently tried blocking the word "gun" in Google Shopping... it's hard not to see the introduction of a Cloudflare DNS resolver as at least a net positive for resisting censorship. And more options is almost always better.
Cloudflare is a private company and they're free to do what they want but their reasoning for the Daily Stormer termination felt like a convenient excuse to me. I'm sure that it was the best business decision for them but when I read a blog post touting 1.1.1.1 as being anti-censorship, I roll my eyes.
Anti-censorship so long as Matthew Prince doesn't have a bad morning.
I run my own DNS-over-TLS resolver at a trusted hosting provider. It upstreams to a selection of roots for which I have reasonable trust. My resolver does DNS-over-TLS, DNS-over-HTTPS, and plain DNS. Multiple listening ports for the secure stuff so that I have something that works for most circumstances.
I would still take someone who can have a bad morning and decide to censor one site (and then write about how concerning that power is), over entities that regularly view it as their "responsibility" to shut down sites and remove content they find objectionable.
I think it's great if people are running their own DNS. :) But I'm certainly not mad that Cloudflare's offering yet another public alternative. As I said, more choices is better.
Running your own root content DNS server isn't particularly hard, note. The public root content DNS server operators are not interested in serving up dummy answers for all sorts of internal stuff that leaks out to the root content DNS servers any more than you are interested in sending it to them. (-:
My tendency would be to ask for some sort of proof, though I realize asking for proof of nonexistence of evidence is near impossible. I'm inclined at present to place more trust in Cloudflare's word at this point, but I try to keep an open mind. It's always good to know both sides' stories.
Well, you have the CloudFlare blog where Prince states "The tipping point for us making this decision was that the team behind Daily Stormer made the claim that we were secretly supporters of their ideology."[0] So, all that is necessary is to find this statement. I won't link to it, but the Daily Stormer has been active on the clear web for most of the time between the seizure of their domain and now. Prince never provided any proof for his claim, not even a screenshot. Of course, a screenshot would have given away, via the visual context, that the statement wasn't from the "team" but from a forum commenter presenting the notion in a joking manner.
As it happens, an internal memo "leaked" to the media wherein Prince admitted he pulled the plug on The Daily Stormer because they are "assholes" and admitted that “The Daily Stormer site was bragging on their bulletin boards about how Cloudflare was one of them."[1] These forums are also what served as the area for readers to comment on articles. Ergo, he acknowledged that he knew his statement about the Daily Stormer "team" claiming CloudFlare supported their ideology was a lie.
You also have to go back in time and consider the context in which The Daily Stormer was successively de-platformed. The site had been publishing low-brow racist commentary including jokes about pushing Jews into ovens and referring to Africans as various simian species for years. It was, however, a single article wherein they mocked the woman who died at the Charlottesville, VA conflict between the alt-right and antifa that led to the widespread outrage that resulted in the The Daily Stormer being temporarily kicked off the internet.[2]
At the same time that Cloudflare was banning the Daily Stormer, they were (and still are, AFAIK) providing services to pro-pedophilia and ISIS web sites. The Daily Stormer itself pointed out not only the hypocrisy of this situation but also the risk it created to CloudFlare's continued safe harbor protections.[3]
You seem to know an awful lot about this specific case, and I'll defer to you on that. I know about the general case, technically speaking (though merely a DNS hobbyist).
However, having a business relationship with another organization is not a right. Hate speakers are not a protected class.
DNS does not operate in the same manner nor with the same assumptions. One can obviously run their own DNS resolver as has been pointed out repeatedly in this thread.
Please list the, "pro-pedophilia and ISIS web sites." hosted by Cloudflare?
Edit: There's probably a business opportunity for a registrar/DNS provider/host that operates under 'free speech purism,' though it's hard to say it won't go the way of usenet in that regard.
The Galileo link works for me. It's worth pointing out Google at the very least censors as easily as Cloudflare [1].
My understanding of Cloudflare's policies, though, is that with the exception of exceptionally objectionable content, Cloudflare only takes sites down in response to a court order. I don't know if it has been established that DNS operators have a proactive obligation to censor, but I imagine it's the kind of thing Cloudflare would go to court over.
"I wish that they talked a bit more about their stance regarding censorship. They have a small paragraph talking about the problem, but they don't talk about the "solution"."
I think there's a good way to put this to the test - establish a DNS "mixer" that will randomly direct DNS requests to either 1.1.1.1 or 8.8.8.8 or (whatever) and let the public have access to it.
In this way, Cloudflare would bear some small expense from processing these DNS requests (essentially zero) but would receive no information about the initial requestor.
It would be interesting to run this experiment and perhaps see some real traffic on the DNS mixer ... and then see how cloudflare responds.
You might direct your questions at your ISP instead as it appears that someone may be intercepting your DNS requests.
----
To elaborate a bit, the differences in the (74.125.x.x) IP addresses being returned are somewhat normal and would usually be attributed to simple load balancing (as d33 pointed out). That is, 8.8.8.8 is actually a load balancer with several servers (including 74.125.46.8, 74.125.46.11, and 74.125.74.3) behind it.
The differences seen in the returned "edns0-client-subnet", however, are, well, "interesting".
As you've directed the requests to 8.8.8.8 directly (as opposed to your system's default resolver, whatever that is), the response returned for "edns0-client-subnet" should normally either be your own IP address or a supernet that includes it. (In my case, for example, the value is the static IP address (/32) of my own resolver.) When sending multiple requests such as you have, the "edns0-client-subnet" shouldn't really be changing from one request/response to the next; at the least, the values shouldn't change this much.
The fact that the responses are changing would seem to indicate that Google DNS servers are receiving the requests from different IP addresses when they should, in fact, all be coming from the same IP address (yours). These changes would lead me to suspect that someone (i.e., your ISP) is intercepting your DNS requests and "transparently proxying" them on your behalf.
If your ISP is using CGNAT (and issues you a private IP address) or something similar, that might explain it. Otherwise, I would be suspicious.
If you run those commands without the +short you will see that the TTL values for those responses are less than 59, which, for Google Public DNS, indicates they are cached and explains why the IP addresses shown are not yours.
The o-o.myaddr.l.google.com domain is a feature of Google's authoritative name servers (ns[1-4].google.com) and not of 8.8.8.8. You can send similar queries through 1.1.1.1, where you will see that there is no EDNS Client Subnet data provided, improving the privacy of your DNS but potentially returning less accurate answers, since Google's authoritative servers do not get your IP subnet, only the IP address of the Cloudflare resolver forwarding your query.
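You can see the difference directly with the query mentioned above; the TXT answers include the address the authoritative server saw and, for Google, an edns0-client-subnet line (output omitted here):
$ dig +short TXT o-o.myaddr.l.google.com @8.8.8.8   # includes an edns0-client-subnet entry covering your IP
$ dig +short TXT o-o.myaddr.l.google.com @1.1.1.1   # no ECS entry, only Cloudflare's egress address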
This is the Cloudflare resolver, right? What's the "privacy-first" part about? It's just another third party DNS host. They haven't changed the protocol to be uninspectable and AFAIK haven't made any guarantees about logging or whatnot that would enhance privacy vs. using whatever you are now. This just means you're trusting Cloudflare instead of Comcast or Google or whoever.
"We will never log your IP address (the way other companies identify you). And we’re not just saying that. We’ve retained KPMG to audit our systems annually to ensure that we're doing what we say."
Now, audits are generally not worth very much (even, perhaps even especially, from a Big Four group like KPMG), but for this type of thing (verifying that a company isn't doing something they promised they would not do) they're about the best we have.
Worth noting they have already edited the article (less than 2 hours later) and taken out the "We will never log your IP" bit...
"We committed to never writing the querying IP addresses to disk and wiping all logs within 24 hours."
"While we need some logging to prevent abuse and debug issues, we couldn't imagine any situation where we'd need that information longer than 24 hours. And we wanted to put our money where our mouth was, so we committed to retaining KPMG, the well-respected auditing firm, to audit our code and practices annually and publish a public report confirming we're doing what we said we would."
It's not uncommon to retain logs like that for debugging purposes, abuse prevention purposes, etc, but then to go back later and wipe them or anonymize them.
Having dealt with KPMG recently (which I do at least once a year...), I would not expect to see the report.
KPMG's risk department - the lawyers' lawyers - appears to be violently allergic to their customers disclosing any report to outside parties. Based on my experience you can get a copy, but first you and the primary customer need to submit some paperwork. And among the conditions you need to agree with is that you don't redistribute the report or its contents.
Disclosure: I deal with security audits and technical aspects of compliance.
> KPMG's risk department - the lawyers' lawyers - appears to be violently allergic to their customers disclosing any report to outside parties.
Isn't that the entire point of such an audit? To be able to present it to outside third-parties?
For examples, Mozilla (CA/B) requires audits for root CAs. The CA must provide a link to the audit on the auditor's public web site -- forwarding a copy or hosting it on their own isn't sufficient.
You'd think, but it's surprisingly difficult to get the real, full audit report. Mozilla's root policy _does_ require that they be shown the report, and has a bunch of extra requirements in there to ensure there's more detail, rather than some summary or overview document the auditors were persuaded to produce for this purpose. But the CA/B rules would allow just an audit letter, which basically almost always says "Yes, we did an audit, and everything is fine" unless the auditors weren't comfortable writing "everything is fine". And almost always they feel that a footnote on a sub-paragraph buried in a detailed report is enough to leave "everything is fine" as the headline in the letter...
If you've ever been audited for some other reason, you'll know they find lots of things, and then you fix them, and that's "fine". But well, is it fine? Or, should we acknowledge that they found lots of things and what those things were, even if you subsequently fixed them? The CA/B says you have several months to hand over your letter after the audit period. Guess what those months are spent doing...
First of all, KPMG is the name of a group. All the Big Four are arranged as group companies: a single financial entity owns the name (e.g. "KPMG", "EY") from some friendly place (London in all but one case) and licenses out the right to operate a member company to professional services companies in various jurisdictions around the world. The group has the famous name and sets some rules about training and compliance, but the employees will (almost all) work for the local member companies, even though reporting for lay people will use the group name, as it does here.
Secondly, the idea in audit is not really about digging into the engineering. So although they will need people who have some idea what DNS is, they don't need experts - this isn't code review. The auditors tend to spend most of their time looking at paperwork and at policy - so e.g. we don't expect auditors to discover a Raspberry Pi configured for packet logging hidden in a patchbay, but we do expect them to find if "Delete logs every morning" is an ambition and it's not anybody's job to actually do that, nor is it anybody's job to check it got done.
I think it's somewhere in between, the article itself states:
"to audit our code and practices annually and publish a public report confirming we're doing what we said we would."
I run an investment fund (hedge fund) and we are completing our required annual audit (not by KPMG). It is quite thorough: they manually check balances in our bank accounts directly with the bank, they verify balances directly off the blockchain (it's a crypto fund) and have us prove ownership of keys by signing messages, etc. And they do do due diligence (lots of doodoo there) that we are not doing scammy things like the equivalent of having a Raspberry Pi attached to the network. Now this is extremely tough of course, and they are limited in what they can accomplish there, but the thought does cross their mind. All firms are different, but from what we've seen most auditors do a decently good job most of the time. Their reputation can only take so many hits before their name is no longer valuable as an auditor's.
Cloudflare is making a public pronouncement that they're not going to sell your DNS data nor track your IP address, with the implication that they will also not use the usage data to upsell you services. That's about the only additional "privacy" edge they offer.
In the same breath, they insinuate that Google both sells and uses DNS usage from their 8.8.8.8 and 8.8.4.4 resolvers.
They are NOT saying Google is lying and collecting the data. They are saying the business model of Google inherently provides such incentive.
Cloudflare is somewhat right: means, motive and opportunity - but for a conviction you have to prove someone acted on the opportunity. Google's motive is tempered by the severe risk of losing trust.
Cloudflare can make an argument they are fundamentally better positioned and that is all they do. As with all US based operations the NSA may cook up some convincing counterarguments and we may never know.
>"They are NOT saying Google is lying and collecting the data."
The OP did not say that cloudflare is "saying" that. The OP very clearly said they are "insinuating" it. And yes under the heading "DNS's Privacy Problem" the post mentions:
"With all the concern over the data that companies like Facebook and Google are collecting on you,..."
I think that juxtaposition of this statement under a bolded heading of "DNS's Privacy Problem" is very much insinuating that.
Bear in mind, Google's changed its mind before and can again at any time. For instance, when they bought DoubleClick they promised not to connect it with the Google account data they had. Then they changed that policy later.
Is the suggestion that a company whose main business is targeting ads based on collecting data about you might be collecting data about you an unfair insinuation?
Please follow the thread - the question of whether an insinuation is "fair" is not what's being discussed. What's being discussed is whether or not Cloudflare said or insinuated that there were privacy concerns with using 8.8.8.8.
> they insinuate that Google both sells and uses DNS
I don't think it's intended to say anything about Google specifically. Keep in mind that there are many other DNS services out there, and some of them are known for being pretty scummy, e.g. replacing NXDOMAIN results with "smart search" / ad pages.
"Privacy First: Guaranteed.
We will never sell your data or use it to target ads. Period.
We will never log your IP address (the way other companies identify you). And we’re not just saying that. We’ve retained KPMG to audit our systems annually to ensure that we're doing what we say.
Frankly, we don’t want to know what you do on the Internet—it’s none of our business—and we’ve taken the technical steps to ensure we can’t."
They want fast resolution of names that point to websites hosted by Cloudflare. Cloudflare makes their money selling their network to businesses that use it, and anything that makes that service better for the end-user increases customer stickiness.
Maybe not _as_ relevant, but still a considerable number of clients are configured to trust OpenDNS, and their far more ambiguous stance on what exactly this is for is appealing to some people. For example, OpenDNS says yes, absolutely it is their business what you're looking up, and maybe you are a Concerned Parent™ who wants to ensure their children don't access RedTube, so that feels like a good idea.
I was thinking more along the lines of their SME offering. DNS filtering is an important layer in network security and CloudFlare’s position of being in the middle of a large portion of Internet traffic, alongside now trying to attract a chunk of general DNS queries, potentially gives them a great deal of insight into who the bad actors are.
I think the whole point for such free services is to log that data and extract statistical meaning out of it - in this case, they pledge to use an anonymized format. On the other hand CloudFlare's mission (ensure secure, solid end to end connectivity) is much better aligned with the user's needs than Google's mission (sell more ads).
Google is one of the first ones using DNS over HTTPS.
BTW if you want to use DNS over HTTPS on Linux/Mac I strongly recommend dnscrypt proxy V2 (golang rewrite) https://github.com/jedisct1/dnscrypt-proxy and put e.g. cloudflare in their config toml file to make use of it.
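A minimal sketch of that setup, assuming dnscrypt-proxy v2 with its bundled example config ('cloudflare' is the name of their DoH endpoint in the public resolver list the proxy ships with):
$ cp example-dnscrypt-proxy.toml dnscrypt-proxy.toml
$ # edit dnscrypt-proxy.toml so it reads:  server_names = ['cloudflare']
$ sudo ./dnscrypt-proxy -config dnscrypt-proxy.toml &
$ dig @127.0.0.1 example.com +short   # local queries now leave the machine over DoH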
Not really. Typically the query includes much more information (the site you want to visit) than the response (an IP potentially shared by thousands or millions of sites).
If it was easy, it would have been done during the TLS 1.3 process, but after a lot of discussion we're down to basically "Here is what people expect 'SNI encryption' would do for them, here's why all the obvious stuff can't achieve that, and here are some ugly, slow things that could work, now what?"
It is hard because of TLS's pre-PFS legacy and, to some extent, also because of the (very meaningful) intention to reduce round trips. The way to do SNI-like stuff is obvious: negotiate an unauthenticated encrypted channel (by means of some EDH variant; you need one round trip for that) and perform any endpoint authentication steps inside that channel. This is what SSH2 does, and AFAIK Microsoft's implementation of encrypted ISO-on-TCP (e.g. rdesktop) does something similar.
Edit: in SSH2 the server authentication happens in the first cryptographic message from the server (for the obvious efficiency reasons), and thus for doing SNI-style certificate selection there would have to be some plaintext server ID in the client's first message, but the security of the protocol does not require that as long as the in-tunnel authentication is mutual (it is for things like Kerberos).
So, it feels like you're saying this is how SSH2 and rdesktop work, and then you caveat that by saying well, no, they actually don't offer this capability at all it turns out.
You are correct that you can do this if you spend one round trip first to set up the channel, and both the proposals for how we might encrypt SNI in that Draft do pay a round trip. Which is why I said they're slow and ugly. And as you noticed, SSH2 and rdesktop do not, in fact, spend an extra round trip to buy this capability they just go without.
This does not make sense. Either people are not concerned about hiding their traffic, or, if they are, it follows that they would be equally if not much more concerned about Google, which can track them across devices and build far more in-depth, invasive profiles than the ISP.
As an aside, it's strange that HTTPS everywhere has been pushed aggressively by many here under the bogeyman of ISP adware and spying while completely ignoring the much larger adware and privacy threats posed by the stalking of Google, Facebook and others. It is disingenuous and insincere.
I can only really discuss the UK, since that's the only place where I've bought home ISP service.
Only a handful of small specialist firms actually just move bits in the UK. Every single UK ISP big enough to advertise on television is signed up to filter traffic and block things for being "illegal" or maybe if Hollywood doesn't like them, or if they have "naughty" words mentioned, or just because somebody slipped. If you're thinking "Not mine" and it runs TV adverts then, oops, nope, you're wrong about that and have had your Internet censored without realising it. I wonder how ISPs got their bad reputation...
Did you read the page? They're supporting DNS over TLS and DNS over HTTPS - both changes to the protocol to make it uninspectable. They've also said they're not logging IP info, and they're getting independent auditors in to confirm what they're saying. Sounds trustworthy to me.
Both encrypted extensions are of course inspectable at the end-point, which is the privacy model being discussed.
What is intriguing to me is why Cloudflare are offering this. Perhaps it is to provide data on traffic that is 'invisible' to them, as in it doesn't currently touch their networks. Possibly as a sales-lead generator.
Or is the plan to become dominant and then use DNS blackholing to shutdown malware that is a threat to their systems?
The goal is to make the sites that use Cloudflare ridiculously fast by putting the authoritative and recursive DNS on the same machine (for clients who use 1.1.1.1).
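You can get a rough feel for that by resolving a Cloudflare-hosted name through 1.1.1.1 and through a resolver that has to go ask Cloudflare's authoritative servers over the network (the reported times will vary; this is just the shape of the comparison):
$ dig @1.1.1.1 blog.cloudflare.com | grep 'Query time'
$ dig @8.8.8.8 blog.cloudflare.com | grep 'Query time'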
Cloudflare is already a significant enough player in handling Internet traffic. Maybe the company does want to do good for the sake of doing good, but I’m wary of companies taking over in this manner and making the Internet more like a monolith than a distributed system.
It seems like a bait-and-switch though? They talk about DNS over HTTPS and DNS without logging, and then direct you to installation instructions that show you how to use the "DNS without logging" part, but nothing that's encrypted. What am I missing?