Where are you testing from? I'm going to guess: a datacenter. Residential customers won't see anything this fast. I'm in a small town in Kansas, connected by 1 Gbit AT&T fiber. I'm getting ~26ms to 1.1.1.1 and ~19ms to my private DNS resolver that I host in a datacenter in Dallas. Google DNS comes in around 19ms.
I suspect that Cloudflare and Google DNS both have POPs in Dallas, which accounts for the similar numbers to my private resolver. My point is, low latency to clients located in datacenters is great, but the advantage shrinks when consumer internet users have to cross their ISP's long private fiber hauls to reach a POP. Once you're at the exchange point, it doesn't really matter which provider you choose. Go with the one with the least censorship, best security, and most privacy. For me, that's the one I run myself.
Side note: I wish AT&T was better about peering outside of their major transit POPs and better about building smaller POPs in regional hubs. For me, that would be Kansas City. Tons of big ISPs and content providers peer in KC but AT&T skips them all and appears to backhaul all Kansas traffic to DFW before doing any peering.
64 bytes from 1.1.1.1: icmp_seq=0 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=128 time=9 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=128 time=2 ms
Google:
64 bytes from 8.8.8.8: icmp_seq=0 ttl=54 time=12 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=11 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=13 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=45 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=54 time=14 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=54 time=11 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=54 time=34 ms
Quad9:
64 bytes from 9.9.9.9: icmp_seq=0 ttl=53 time=10 ms
64 bytes from 9.9.9.9: icmp_seq=1 ttl=53 time=69 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=53 time=14 ms
64 bytes from 9.9.9.9: icmp_seq=3 ttl=53 time=58 ms
64 bytes from 9.9.9.9: icmp_seq=4 ttl=53 time=52 ms
One thing I noticed is that when I first pinged 1.1.1.1 I got 14ms, which then quickly dropped to ~3ms consistently:
64 bytes from 1.1.1.1: icmp_seq=0 ttl=128 time=14 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=128 time=14 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=128 time=3 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=128 time=1 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=128 time=4 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=52 time=241.529 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=52 time=318.034 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=52 time=337.291 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=52 time=255.748 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=52 time=247.765 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=52 time=235.611 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=52 time=239.427 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=52 time=247.911 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=52 time=260.911 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=52 time=281.153 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=52 time=300.363 ms
64 bytes from 1.1.1.1: icmp_seq=11 ttl=52 time=234.296 ms
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
Request timeout for icmp_seq 7
Request timeout for icmp_seq 8
Request timeout for icmp_seq 9
Request timeout for icmp_seq 10
$ ping 1.0.0.1
PING 1.0.0.1 (1.0.0.1): 56 data bytes
64 bytes from 1.0.0.1: icmp_seq=0 ttl=50 time=167.359 ms
64 bytes from 1.0.0.1: icmp_seq=1 ttl=50 time=165.791 ms
64 bytes from 1.0.0.1: icmp_seq=2 ttl=50 time=165.846 ms
64 bytes from 1.0.0.1: icmp_seq=3 ttl=50 time=166.755 ms
64 bytes from 1.0.0.1: icmp_seq=4 ttl=50 time=166.694 ms
64 bytes from 1.0.0.1: icmp_seq=5 ttl=50 time=166.088 ms
64 bytes from 1.0.0.1: icmp_seq=6 ttl=50 time=166.460 ms
64 bytes from 1.0.0.1: icmp_seq=7 ttl=50 time=166.668 ms
64 bytes from 1.0.0.1: icmp_seq=8 ttl=50 time=166.753 ms
64 bytes from 1.0.0.1: icmp_seq=9 ttl=50 time=165.670 ms
64 bytes from 1.0.0.1: icmp_seq=10 ttl=50 time=166.816 ms
64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=17.580 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=18.025 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=17.780 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=18.231 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=17.906 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=18.447 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=59 time=22.806 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=23.321 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=24.379 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=25.869 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=24.485 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=24.165 ms
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=57 time=23.005 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=22.867 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=24.461 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=23.680 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=35.581 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=21.033 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=57 time=41.634 ms
ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=1.36 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=1.32 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=1.34 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=1.38 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=1.37 ms
ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=1.33 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=1.38 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=1.35 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=1.36 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=1.35 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=5.044 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=6.447 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=6.371 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=6.308 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=60 time=7.317 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=60 time=5.989 ms
Dubai:
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=48.728 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=48.450 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=47.266 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=45.320 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=46.470 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=55 time=14.053 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=12.715 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=55 time=13.615 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=55 time=14.018 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=55 time=12.261 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=55 time=11.428 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=55 time=11.950 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=55 time=13.034 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=55 time=13.679 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=55 time=12.415 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=55 time=12.088 ms
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 89.228.6.1: Destination net unreachable.
Reply from 89.228.6.1: Destination net unreachable.
Reply from 89.228.6.1: Destination net unreachable.
Reply from 89.228.6.1: Destination net unreachable.
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=61 time=15.860 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=61 time=15.799 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=61 time=15.616 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=61 time=15.769 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=61 time=15.431 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=61 time=16.459 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=61 time=15.860 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=61 time=15.930 ms
Tokyo, domestic 2 Gbps fiber, but connected over Wi-Fi:
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=5.531 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=4.420 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=5.450 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=5.438 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=4.231 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=5.933 ms
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=57 time=6.440 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=4.574 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=4.684 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=4.992 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=5.942 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=5.955 ms
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=58 time=111.781 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=102.982 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=102.206 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=110.135 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=110.085 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=58 time=6.886 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=5.475 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=58 time=5.674 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=58 time=5.557 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=58 time=7.066 ms
$ ping 9.9.9.9
PING 9.9.9.9 (9.9.9.9): 56 data bytes
64 bytes from 9.9.9.9: icmp_seq=0 ttl=58 time=5.880 ms
64 bytes from 9.9.9.9: icmp_seq=1 ttl=58 time=5.534 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=58 time=5.251 ms
64 bytes from 9.9.9.9: icmp_seq=3 ttl=58 time=5.194 ms
64 bytes from 9.9.9.9: icmp_seq=4 ttl=58 time=5.698 ms
Something interesting I saw pointed out in the Reddit thread about this is that the TTL is way different between 1.1.1.1 and 8.8.8.8.
Your pings show the same thing: 128 vs 53. I tried on my laptop and get something similar. traceroute to 1.1.1.1 shows 1 hop, which is wrong. 1.0.0.1 shows a few hops.
Unless you tell it not to, ping will try a reverse lookup on the IP you are pinging in order to display that to you in the output. It's a good idea to keep that in mind when you ping something, especially if you notice the first ping is abnormally slow.
Perhaps that depends on operating system. In the 30 years I have been using ping on Linux, the reverse lookup time is absolutely included in the first ping time.
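Either way, it's easy to take the reverse lookup out of the picture. Most ping implementations accept -n to skip reverse resolution entirely; the flags below are the common Linux/macOS spellings:
$ ping -n -c 5 1.1.1.1    # -n: numeric output only, no reverse DNS lookups; -c 5: five probes
$ ping -c 5 1.1.1.1       # compare with the default, which does resolve the address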
I think AT&T's fiber modems are using 1.1.1.1. I'm getting < 1ms ping times and according to Cloudflare's website there's no data center close enough to me for that to be possible without violating the speed of light.
What happens if you go to https://1.1.1.1 in a browser? It should have a valid TLS cert and show a big banner that says, among other things, "Introducing 1.1.1.1". If your ISP's CPE or anything else is messing with traffic to that IP, it won't load/display that.
Call your ISP and ask them why they're blocking access to some websites. Ask them if there are any other websites they're blocking. Tweet about it. Etc.
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=10.8 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=11.3 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=10.7 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=10.9 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=11.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=60 time=11.1 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=60 time=10.5 ms
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=7.65 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=8.53 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=10.2 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=8.04 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=7.92 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=59 time=7.85 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=59 time=7.88 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=59 time=7.73 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=59 time=7.73 ms
BigPipe, Spark, Skinny and Vodafone don't believe in peering and thus don't peer with Cloudflare at APE. If you want the best performance, 2degrees, Orcon, Voyager or Slingshot are the better choices since they do peer there.
iMac ~ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=64 time=0.688 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.814 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=1.153 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=0.752 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=64 time=0.755 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=64 time=0.789 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=64 time=0.876 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=64 time=0.869 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=64 time=0.830 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=64 time=1.387 ms
--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.688/0.891/1.387/0.204 ms
Pinging 8.8.8.8 averages 8ms. CloudFlare must have a POP here in Nashville?
That's probably because AT&T is using 1.1.1.1 for something internal and breaking the public internet for its users: you get a really fast ping on 1.1.1.1, but it's not the 1.1.1.1 you are trying to reach.
That's impressive. My AT&T wifi router caps bandwidth at 300 Mb/s (instead of 1 Gb/s on Ethernet) and adds 10-20 ms of latency. And this is standing next to it, using 5 GHz.
I'm guessing Google's resolvers are a little busier than Cloudflare's right now, because pretty much nobody who isn't on HN is hitting Cloudflare's yet. It will be a more interesting comparison in 6 months.
I'd be surprised if increased load has a negative effect on 1.1.1.1's performance.
We run a homogeneous architecture -- that is, every machine in our fleet is capable of handling every type of request. The same machines that currently handle 10% of all HTTP requests on the internet, and handle authoritative DNS for our customers, and serve the DNS F root server, are now handling recursive DNS at 1.1.1.1. These machines are not sitting idle. Moreover, this means that all of these services are drawing from the same pool of resources, which is, obviously, enormous. This service will scale easily to any plausible level of demand.
In fact, in this kind of architecture, a little-used service is actually likely to be penalized in terms of performance because it's spread so thin that it loses cache efficiency (for all kinds of caches -- CPU cache, DNS cache, etc.). More load should actually make it faster, as long as there is capacity, and there is a lot of capacity.
Meanwhile, Cloudflare is rapidly adding new locations -- 31 new locations in March alone, bringing the current total to 151. This not only adds capacity for running the service, but reduces the distance to the closest service location.
In the past I worked at Google. I don't know specifically how their DNS resolver works, but my guess is that it is backed by a small set of dedicated containers scheduled via Borg, since that's how Google does things. To be fair, they have way too many services to run them all on every machine. That said, they're pretty good at scheduling more instances as needed to cover load, so they should be fine too.
In all likelihood, what really makes the difference is the design of the storage layer. But I don't know the storage layer details for either Google's or Cloudflare's resolvers so I won't speculate on that.
> In fact, in this kind of architecture, a little-used service is actually likely to be penalized in terms of performance because it's spread so thin that it loses cache efficiency
This is exactly what I'm seeing in the small amount of testing I'm doing against Google to compare with Cloudflare.
Sometimes Google will respond in 30ms (cache hit); more often than not it has to do at least a partial lookup (160ms), and sometimes it goes even further (400ms).
The worst I'm encountering on 1.1.1.1 is around 200ms for a cache miss.
Basically, what it looks like is that Google is load balancing my queries and I'm getting poor performance because of it - I'm guessing they simply need to kill some of their capacity to see increased cache hits.
Anecdotally, I'm at least seeing better performance out of 1.1.1.1 than my ISP's resolver (Internode), which has consistently done better than 8.8.8.8 in the past.
Also anecdotally, my short 1-2 month trial of systemd-resolved is now coming to a failed conclusion; I suspect I'll be going back to my pdnsd setup because it just works better.
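For anyone who wants to reproduce this kind of comparison, a rough sketch using dig (example.org is just a stand-in for any name, and the numbers will vary with your location and cache luck):
$ dig example.org @8.8.8.8 | grep 'Query time'   # first query: possibly a cache miss
$ dig example.org @8.8.8.8 | grep 'Query time'   # immediate repeat: should be a cache hit
$ dig example.org @1.1.1.1 | grep 'Query time'
$ dig example.org @1.1.1.1 | grep 'Query time'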
ICMP round-trip times don't necessarily prove anything - you need to be examining DNS resolution times.
Lots of network hardware (routers, and firewalls if they're not outright blocking) de-prioritises ICMP (and other types of network control/testing traffic), and the likelihood is that Google (and other free DNS providers) are throttling the number of ICMP replies that they send.
They're not providing an ICMP reply service, they're providing a DNS service. I had a situation during the week where I had to tell one of our engineers to stop tracking 8.8.8.8 as an indicator of network availability for this reason.
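One way to look at resolution time rather than ICMP RTT, assuming dig is installed (dig reports the DNS round trip it measured as "Query time"):
$ for r in 1.1.1.1 8.8.8.8 9.9.9.9; do echo "== $r"; dig news.ycombinator.com @$r | grep 'Query time'; done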
Note, from Google Compute Engine use 8.8.8.8, as it should always be faster. I'm guessing the 8.8.8.8 service exists in every Google Cloud region. Even better, use the default GCE autogenerated DNS IP that they configure in /etc/resolv.conf to get instance-name-resolving magic.
Usually best to use 169.254.169.254, which is the magic "cloud metadata address" that talks directly to the local hypervisor (I think?). That will recurse to public DNS as necessary. https://cloud.google.com/compute/docs/internal-dns
I agree that's usually best, but one exception is worth noting: if you want only publicly resolvable results, don't use 169.254.169.254. That address adds convenient predictable hostnames for your project's instances under the .internal TLD.
Also, no need to hardcode that address - DHCP will happily serve it up. It also has the hostname metadata.google.internal and the (disfavored for security reasons) bare short hostname metadata.
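A sketch of what that looks like from inside a GCE instance (the instance and project names below are placeholders, not real ones):
$ cat /etc/resolv.conf                                            # DHCP normally points nameserver at 169.254.169.254
$ dig +short metadata.google.internal @169.254.169.254            # the metadata host itself
$ dig +short my-instance.c.my-project.internal @169.254.169.254   # hypothetical instance name under the .internal TLD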
The "backup" IPv4 address is 1.0.0.1 rather than, say, 1.1.1.2, and why they needed APNIC's help to make this work
In theory you can tell other network providers "Hi, we want you to route this single special address 1.1.1.1 to us" and that would work. But in practice most of them have a rule which says "The smallest routes we care about are a /24" and 1.1.1.1 on its own is a /32. So what gets done about that is you need to route the entire /24 to make this work, and although you can put other services in that /24 if you _really_ want, they will all get routed together, including failover routing and other practices. So, it's usually best to "waste" an entire /24 on a single anycast service. Anycast is not exactly a cheap homebrew thing, so a /24 isn't _that_ much to use up.
I'm in a city in southern Japan (so most of my traffic needs to go to Tokyo first), on a gigabit fiber connection.
--- 1.1.1.1 ping statistics ---
rtt min/avg/max/mdev = 30.507/32.155/36.020/1.419 ms
--- 8.8.8.8 ping statistics ---
rtt min/avg/max/mdev = 19.618/21.572/23.009/0.991 ms
The traceroutes are inconclusive but they kind of look like Google has a POP in Fukuoka and CloudFlare are only in Tokyo.
edit: Namebench was broken for me, but running GRC's DNS Benchmark my ISP's own resolver is the fastest, then comes Google 8.8.8.8, then Level3 4.2.2.[123], then OpenDNS, then NTT, and then finally 1.1.1.1.
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Ping statistics for 1.1.1.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 45ms, Maximum = 45ms, Average = 45ms
Pinging 1.0.0.1 with 32 bytes of data:
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Ping statistics for 1.0.0.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 46ms, Maximum = 46ms, Average = 46ms
Pinging 8.8.4.4 with 32 bytes of data:
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Ping statistics for 8.8.4.4:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 29ms, Maximum = 29ms, Average = 29ms
Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Ping statistics for 8.8.8.8:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 21ms, Maximum = 21ms, Average = 21ms
Pinging 208.67.220.220 with 32 bytes of data:
Reply from 208.67.220.220: bytes=32 time=45ms TTL=54
Reply from 208.67.220.220: bytes=32 time=46ms TTL=54
Reply from 208.67.220.220: bytes=32 time=45ms TTL=54
Reply from 208.67.220.220: bytes=32 time=50ms TTL=54
Ping statistics for 208.67.220.220:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 45ms, Maximum = 50ms, Average = 46ms
Pinging 208.67.222.222 with 32 bytes of data:
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Ping statistics for 208.67.222.222:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 61ms, Maximum = 61ms, Average = 61ms
I would be interested to hear from Google (8.8.8.8) how much ping traffic that address gets...
I know that I will quickly ping 8.8.8.8 as a very quick and dirty test of whether the network is up... it's just faster to type than any other address I could test with.
It looks like you are testing either from data centers where Cloudflare has servers or from places that exchange traffic with it, which is likely in a data center given the traffic Cloudflare transports. What most users want is the ping time from home/office.
DNS-over-HTTPS doesn’t make as much sense to me as DNS-over-TLS. They are effectively the same thing, but HTTPS has the added overhead of the HTTP headers per request. If you look at the currently in progress RFC, https://tools.ietf.org/html/draft-ietf-doh-dns-over-https-04, this is quite literally the only difference. The DNS request is encoded as a standard serialized DNS packet.
The article mentions QUIC as being something that might make HTTPS faster than standard TLS. I guess over time DNS servers can start encoding responses as JSON, like Google's impl, though there is no spec that I've seen yet that actually defines that format.
Can someone explain what the excitement around DNS-over-HTTPS is all about, and why DNS-over-TLS isn’t enough?
EDIT: I should mention that I started implementing this in trust-dns, but after reading the spec became less enthusiastic about it and more interested in finalizing my DNS-over-TLS support in the trust-dns-resolver. The client and server already support TLS, I couldn't bring myself to raise the priority enough to actually complete the HTTPS impl (granted it's not a lot of work, but still, the tests etc, take time).
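For the curious, Cloudflare's resolver also exposes a JSON flavour alongside the wire-format interface described in the draft; a quick way to poke at it from the command line (URL and header are as documented at launch, so treat this as a sketch):
$ curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=example.com&type=AAAA'
# the wire-format variant instead sends a binary DNS message using the application/dns-udpwireformat media type from the draft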
Some ISPs block outbound DNS from customers to anywhere but their resolvers, filtering based on target port.
This is a particularly common trick in countries that attempt to censor the internet.
It's a lot harder to do that with DNS-over-HTTPS because it looks like normal traffic.
That said, in this case ISPs can just null route the IP address of the obvious main resolvers such as 1.1.1.1. I imagine most of the benefit is surely to people who can spin up their own resolvers.
When we add TLS on top of the protocol, ISPs can only filter based on port at that point. We can run DNS on 443 if that helps, but as you said, static well-known IPs can then be blocked.
> I imagine most of the benefit is surely to people who can spin up their own resolvers.
There are already many easily run DNS resolvers available. Is there a benefit you see in operating them over HTTPS that improves on that?
This is really the elephant in the room. For all we know, ISP bad-actors have never cared about DNS for data-collection purposes, and they're already using SNI to gather data to sell to marketers. I think it's absolutely crucial to find a way to (at least optionally) send SNI encrypted to the server.
The client that sends SNI is, AFAIK, the browser or a similar piece of software. Some older browsers don't support SNI so they can only access single-vhost-per-ip over https.
This means you'll have a really hard time trying to get rid of SNI system-wide, what with a lot of minor apps making their own https connections (granted, on Android or iOS they probably use a common API, but not on a computer).
Well, with SNI the concern isn't DNS. Any TLS connection that supports SNI (basically everything that isn't ancient) would have to be fixed. Also, SNI is a pretty useful thing to have, and getting rid of it doesn't exactly fix much. Without SNI the server only has the destination IP address to determine which site and thus which certificate to send to the client. Having HTTPS sites with multiple certificates hosted on one IP address only works because of SNI. You would break a large portion of the web by disabling it. Also, even if you do disable SNI, the server still sends back the certificate with the domain names in it. And even if you ignore all of that, there's still reverse DNS, which will probably be accurate if they send mail from that server, and you can always do a DNS lookup for every domain name there is to get a map of which domains point to a given IP. Due to DNS-based geolocation that won't work for every site, but the sites using that are going to be big enough to find their IP address ranges via another method.
In short, there's really no good solution here but an amendment to TLS could conceivably make it to where it wouldn't be possible to narrow it down to which site that an IP address hosts the user was visiting. That could actually be good enough for traffic to e.g. cloudflare.
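For anyone who hasn't watched the leak happen, it's easy to see on the wire. A rough look with tcpdump (the interface name is an assumption; the server name appears in the ASCII dump of the ClientHello even though everything after the handshake is encrypted):
$ sudo tcpdump -l -i eth0 -nn -A 'tcp dst port 443' | grep -a 'example.com'
# then browse to https://example.com (or any site) in another window and watch its hostname show up in plaintext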
On non-rooted Android, you have to set a static IP for every network, and then there will be an option to enter DNS servers. They default to Google DNS.
Static IP settings are under advanced.
I suppose there is also domain fronting [1], but it won't be fast or an easy-to-remember IP address anymore. And if you need that, you might need a VPN anyway?
Yes! And I plan to actually build in a default setup to use that now that it exists. I should have mentioned up front that this is the most exciting thing to me in the announcement.
This is a very exciting development, thank you for posting this.
I've implemented DNS before. Doing this saves an entire 300 lines of code. At the same time, it makes the DNS server much more complicated. On top of that, implementing a compliant POSIX libc will now either use a completely different code path, or pull in a huge amount of code to implement HTTP, HTTP/2, and QUIC. If the simpler, cleaner, and more performant route is taken, it will break when someone screws up "legacy" DNS without noticing, because it works in the browser.
It's not worth the complexity of multiple protocols that do the same thing. And it's not worth making the base system insanely complicated so that the magic 4 letters 'http' can show up.
TLS? Yeah, since the simpler secure DNSes failed, we might as well use that. But let's try to keep http complexity contained.
It seems like crappy networks are the norm nowadays, and the preference of the ISPs is to offer the web only. You need a middle box just to access the internet at large (e.g., Tor). Masquerading traffic as web traffic appears to be a good tactic, though inefficient/sloppy.
Yeah, but once everything is tunneled over HTTP it will finally fix the network operator problem once and for all since you can't filter applications using ports.
There are a couple of different approaches. One is DNS-over-TLS. That takes the existing DNS protocol and adds transport layer encryption. Another is DNS-over-HTTPS. It includes security but also all the modern enhancements like supporting other transport layers (e.g., QUIC) and new technologies like HTTP/2 Server Push. Both DNS-over-TLS and DNS-over-HTTPS are open standards. And, at launch, we've ensured 1.1.1.1 supports both.
We think DNS-over-HTTPS is particularly promising — fast, easier to parse, and encrypted.
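A quick way to exercise the DNS-over-TLS side, assuming kdig from knot-dnsutils is available (flag spellings are from recent kdig versions, so treat this as a sketch):
$ kdig @1.1.1.1 +tls-ca +tls-host=cloudflare-dns.com example.com A   # DNS over TLS on port 853, validating the cert
$ kdig @1.1.1.1 example.com A                                        # plain old port-53 query for comparison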
DNS over HTTPS would be harder for governments and other middlemen to block or intercept, despite it being less efficient. It would look like any other HTTPS request. Especially if browsers agreed to universally support it.
TLS isn't magic; you can still observe the encrypted stream and make assumptions based on bytes sent/received on the wire, protocol patterns, and timing. See the CRIME and BREACH attacks.
Perhaps if the attacker filters traffic first by protocol, it's harder but not at all impossible. I'd guess that DNS-over-HTTPS packets won't be hard to identify by other means.
Thank you for responding, Patrick. As one of the authors of the RFC, your views on this are a great contribution to the conversation.
> rfc 8336
I'll have to read up on this, thanks for the link.
> h2 coalescing
DNS is already capable of using TCP/TLS (and by its nature UDP) for multiple DNS requests at a time. Is there some additional benefit we get here?
> h2 push
This one is interesting, but DNS already has optimizations built in for things like CNAME and SRV record lookups, where the IP is implicitly resolved when available and sent back with the original request. Is this adding something additional to those optimizations?
> caching
DNS has caching built-in, TTLs on each record. Is there something this is providing over that innate caching built into the protocol?
> it starts to add up to a very interesting story.
I'd love to read about that story, if someone has written something, do you have a link?
Also, a question that occurred to me, are we talking about the actual website you're connecting to being capable of preemptively passing DNS resolution to web clients over the same connection?
This story will evolve as the HTTP ecosystem evolves - but that's part of the point.
WRT coalescing/origin/secondary-certificates, it's a powerful notion to consider your recursive resolver's ability to serve other HTTP traffic on the same connection. That has implications for anti-censorship and traffic analysis.
Additionally the ability to push DNS information that it anticipates you will need outside the real time moment of an additional record has some interesting properties.
DoH right now is limited to the recursive resolver case. But it does lay the groundwork for other http servers being able to publish some DNS information - that's something that needs some deep security based thinking before it can be allowed, but this is a step towards being compatible with that design.
WRT caching - some apps might want a custom DNS cache (as Firefox does), but some may simply use an existing HTTP cache for that purpose without having to invent a DNS cache. Leveraging code is good. There are lots of other little things like that which http brings for free - media type negotiation, proxying, authentication, etc..
> There are lots of other little things like that which http brings for free - media type negotiation, proxying, authentication, etc..
Reading a little between the lines here, would you say that at some point we effectively replace the existing DNS resolution graph with something implemented entirely over http? Where features like forwarding and proxying would have more common off the shelf tooling?
I can start to see a picture here that looks to be more about common/shared code, and less about actual features of the underlying protocols.
As a complete layperson, h2 push might be interesting because a DNS resolver could learn to detect patterns in DNS queries (e.g. someone who requests twitter.com usually requests pbs.twimg.com and abs.twimg.com right after) and start to push those automatically when they get the query for twitter.com.
TIL you can also use 1.1 and it will expand to 1.0.0.1
$> ping 1.1
PING 1.1 (1.0.0.1) 56(84) bytes of data.
64 bytes from 1.0.0.1: icmp_seq=1 ttl=55 time=28.3 ms
64 bytes from 1.0.0.1: icmp_seq=2 ttl=55 time=33.0 ms
64 bytes from 1.0.0.1: icmp_seq=3 ttl=55 time=43.6 ms
64 bytes from 1.0.0.1: icmp_seq=4 ttl=55 time=41.7 ms
64 bytes from 1.0.0.1: icmp_seq=5 ttl=55 time=56.5 ms
64 bytes from 1.0.0.1: icmp_seq=6 ttl=55 time=38.4 ms
64 bytes from 1.0.0.1: icmp_seq=7 ttl=55 time=34.8 ms
64 bytes from 1.0.0.1: icmp_seq=8 ttl=55 time=45.7 ms
64 bytes from 1.0.0.1: icmp_seq=9 ttl=55 time=45.2 ms
64 bytes from 1.0.0.1: icmp_seq=10 ttl=55 time=43.1 ms
I don't actually think it's in a spec formally, but it is in a common C lib[0].
> a.b
> Part a specifies the first byte of the binary address. Part b is interpreted as a 24-bit value that defines the rightmost three bytes of the binary address. This notation is suitable for specifying (outmoded) Class A network addresses.
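The same shorthand works for anything that parses addresses with inet_aton, so for example (on a libc with the classic behaviour):
$ ping -c 1 1.1          # a.b form: parsed as 1.0.0.1
$ ping -c 1 127.1        # a.b form: parsed as 127.0.0.1
$ ping -c 1 192.168.1    # a.b.c form: parsed as 192.168.0.1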
It's not fully true that 127.0.0.1 is the same as 0.0.0.0. For example, binding a webserver to 0.0.0.0 makes it listen on all interfaces, including public ones, while 127.0.0.1 is strictly localhost.
What I was trying to say is - On Linux, INADDR_ANY (0.0.0.0) supplied to connect() or sendto() calls is treated as a synonym for INADDR_LOOPBACK (127.0.0.1) address.
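A minimal way to see the connect() side of that on a Linux box (python3's built-in http.server is used here only as a convenient loopback-only listener):
$ python3 -m http.server 8080 --bind 127.0.0.1 &     # listens on loopback only
$ curl -sI http://127.0.0.1:8080/ | head -1           # reaches it, as expected
$ curl -sI http://0.0.0.0:8080/ | head -1             # on Linux this also reaches the loopback listener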
It's one more letter than a suffix, but as a prefix it's a bit clearer. I've known companies to post LAN hostname addresses that way, and in written/printed materials it stands out pretty clearly as an address to type.
It follows the URL standards (no scheme implies the current or default scheme). Many auto-linking tools (such as Markdown, Word) recognize it by default (though sometimes results are unpredictable given scheme assumptions). It's also increasingly the recommendation for HTML resources where you do want to help ensure same-scheme requests (a good example: cross-server/CDN CSS and JS links are now typically written as //css-host.example.com/some/css/file.css).
I wish that they talked a bit more about their stance regarding censorship. They have a small paragraph talking about the problem, but they don't talk about the "solution".
While Cloudflare has been pretty neutral about censoring sites in the past (notably, pirate sites), the Daily Stormer incident put them in a tough spot[1].
They talk a bit about Project Galileo (the link is broken BTW, it should be https://www.cloudflare.com/galileo), but their examples do not mention topics that would be controversial in western societies, and the site is quite vague.
Would they also protect sites like sci-hub, for example?
While I would rather use a DNS not owned by Google, I have never seen any site blocked by them, including sites with a nation-wide block. I hope that Cloudflare is able to do the same thing.
There's a pretty big difference between terminating a business relationship (which is what Cloudflare did to Daily Stormer, and which Google also did a couple days before Cloudflare did) and refusing to answer DNS queries for third-party domains with which there is no business relationship. It's hard to imagine how the former could be used as precedent to compel the latter.
Cloudflare has no interest in censorship -- the whole reason the Daily Stormer thing was such a big deal was because it's the only time Cloudflare has ever terminated a customer for objectionable content. Be sure to read the blog post to understand: https://blog.cloudflare.com/why-we-terminated-daily-stormer/
(Disclosure: I work for Cloudflare but I'm not in a position to set policy.)
I probably should have made a clearer point instead of linking to TorrentFreak.
I did not mean that I was worried that CloudFlare's DNS would start blocking sites whose content they disagree with (although that would also be worrisome).
I'm worried that copyright holders might be able to use the Daily Stormer case as a precedent to force CloudFlare to stop offering services to infringing sites.
If they are able to do that, I can also see them attempting to force CloudFlare to remove DNS entries as well.
Right, as I said, it's hard for me to see how one could be used as precedent for the other given how different the situations are. And if you could use it, you could just as easily do the same against Google DNS.
Bear in mind, they dropped Daily Stormer because they were claiming Cloudflare agreed with their ideology. Which someone in the previous discussion pointed out was a Terms of Service violation.
DNS resolving offers no such terms and no such reason to make such a claim. I don't see that playing here. And bear in mind, when the CEO did it, he wrote about how dangerous it was that companies had that power. I don't feel other companies running other DNS services hold that level of concern or awareness.
When you consider that their "competitor" in the space of free DNS resolvers with easy-to-remember IPs is Google, who recently tried blocking the word "gun" in Google Shopping... it's hard not to see the introduction of a Cloudflare DNS resolver as at least a net positive for resisting censorship. And more options is almost always better.
Cloudflare is a private company and they're free to do what they want but their reasoning for the Daily Stormer termination felt like a convenient excuse to me. I'm sure that it was the best business decision for them but when I read a blog post touting 1.1.1.1 as being anti-censorship, I roll my eyes.
Anti-censorship so long as Matthew Prince doesn't have a bad morning.
I run my own DNS-over-TLS resolver at a trusted hosting provider. It upstreams to a selection of roots for which I have reasonable trust. My resolver does DNS-over-TLS, DNS-over-HTTPS, and plain DNS. Multiple listening ports for the secure stuff so that I have something that works for most circumstances.
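In case anyone wants to try the same, here is a very rough sketch of the DNS-over-TLS listening side using Unbound (option names are from recent Unbound releases, older ones spell them ssl-*, and the key/cert paths are placeholders for a cert issued for the resolver's own hostname):
$ sudo tee -a /etc/unbound/unbound.conf <<'EOF'
server:
    interface: 0.0.0.0@853
    tls-port: 853
    tls-service-key: "/etc/unbound/dot.key"
    tls-service-pem: "/etc/unbound/dot.pem"
EOF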
I would still take someone who can have a bad morning and decide to censor one site (and then write about how concerning that power is), over entities that regularly view it as their "responsibility" to shut down sites and remove content they find objectionable.
I think it's great if people are running their own DNS. :) But I'm certainly not mad that Cloudflare's offering yet another public alternative. As I said, more choices is better.
Running your own root content DNS server isn't particularly hard, note. The public root content DNS server operators are not interested in serving up dummy answers for all sorts of internal stuff that leaks out to the root content DNS servers any more than you are interested in sending it to them. (-:
My tendency would be to ask for some sort of proof, though I realize asking for proof of nonexistence of evidence is near impossible. I'm inclined at present to place more trust in Cloudflare's word at this point, but I try to keep an open mind. It's always good to know both sides' stories.
Well, you have the CloudFlare blog where Prince states "The tipping point for us making this decision was that the team behind Daily Stormer made the claim that we were secretly supporters of their ideology."[0] So, all that is necessary is to find this statement. I won't link to it but the Daily Stormer has been active on the clear web for most of the time between the seizure of their domain and now. Prince never provided any proof for his claim, not even a screenshot. Of course, a screenshot would have given away, via the visual context, that the statement wasn't from the "team" but from a forum commenter presenting the notion in a joking manner.
As it happens, an internal memo "leaked" to the media wherein Prince admitted he pulled the plug on The Daily Stormer because they are "assholes" and admitted that “The Daily Stormer site was bragging on their bulletin boards about how Cloudflare was one of them."[1] These forums are also what served as the area for readers to comment on articles. Ergo, he acknowledged that he knew his statement about the Daily Stormer "team" claiming CloudFlare supported their ideology was a lie.
You also have to go back in time and consider the context in which The Daily Stormer was successively de-platformed. The site had been publishing low-brow racist commentary including jokes about pushing Jews into ovens and referring to Africans as various simian species for years. It was, however, a single article wherein they mocked the woman who died at the Charlottesville, VA conflict between the alt-right and antifa that led to the widespread outrage that resulted in The Daily Stormer being temporarily kicked off the internet.[2]
At the same time that Cloudflare was banning the Daily Stormer, they were (and still are, AFAIK) providing services to pro-pedophilia and ISIS web sites. The Daily Stormer itself pointed out not only the hypocrisy of this situation but also the risk it created to CloudFlare's continued safe harbor protections.[3]
You seem to know an awful lot about this specific case, and I'll defer to you on that. I know about the general case, technically speaking (though merely a DNS hobbyist).
However, having a business relationship with another organization is not a right. Hate speakers are not a protected class.
DNS does not operate in the same manner nor with the same assumptions. One can obviously run their own DNS resolver as has been pointed out repeatedly in this thread.
Please list the, "pro-pedophilia and ISIS web sites." hosted by Cloudflare?
Edit: There's probably a business opportunity for a registrar/DNS provider/host that operates under 'free speech purism,' though it's hard to say it won't go the way of usenet in that regard.
The Galileo link works for me. It's worth pointing out Google at the very least censors as easily as Cloudflare [1].
My understanding of Cloudflare's policies, though, is that with the exception of exceptionally objectionable content, Cloudflare only takes sites down in response to a court order. I don't know if it has been established that DNS is something which operators have a proactive obligation to censor, but I imagine it's the kind of thing Cloudflare would go to court over.
"I wish that they talked a bit more about their stance regarding censorship. They have a small paragraph talking about the problem, but they don't talk about the "solution"."
I think there's a good way to put this to the test - establish a DNS "mixer" that will randomly direct DNS requests to either 1.1.1.1 or 8.8.8.8 or (whatever) and let the public have access to it.
In this way, Cloudflare would bear some small expense from processing these DNS requests (essentially zero) but would receive no information about the initial requestor.
It would be interesting to run this experiment and perhaps see some real traffic on the DNS mixer ... and then see how cloudflare responds.
You might direct your questions at your ISP instead as it appears that someone may be intercepting your DNS requests.
----
To elaborate a bit, the differences in the (74.125.x.x) IP addresses being returned are somewhat normal and would usually be attributed to simple load balancing (as d33 pointed out). That is, 8.8.8.8 is actually a load balancer with several servers (including 74.125.46.8, 74.125.46.11, and 74.125.74.3) behind it.
The differences seen in the returned "edns0-client-subnet", however, are, well, "interesting".
As you've directed the requests to 8.8.8.8 directly (as opposed to your system's default resolver, whatever that is), the response returned for "edns0-client-subnet" should normally either be your own IP address or a supernet that includes it. (In my case, for example, the value is the static IP address (/32) of my own resolver.) When sending multiple requests such as you have, the "edns0-client-subnet" shouldn't really be changing from one request/response to the next; at the least, the values shouldn't change this much.
The fact that the responses are changing would seem to indicate that Google DNS servers are receiving the requests from different IP addresses when they should, in fact, all be coming from the same IP address (yours). These changes would lead me to suspect that someone (i.e., your ISP) is intercepting your DNS requests and "transparently proxying" them on your behalf.
If your ISP is using CGNAT (and issues you a private IP address) or something similar, that might explain it. Otherwise, I would be suspicious.
If you run those commands without the +short you will see that the TTL values for those responses are less than 59 (which, for Google Public DNS, indicates they are cached, and explains why the IP addresses shown are not yours).
The o-o.myaddr.l.google.com domain is a feature of Google's authoritative name servers (ns[1-4].google.com) and not of 8.8.8.8. You can send similar queries through 1.1.1.1 (where you will see that there is no EDNS Client Subnet data provided, improving the privacy of your DNS but potentially returning less accurate answers, as Google's authoritative servers do not have your IP subnet, but only the IP address of the CloudFlare resolver forwarding your query).
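For reference, the queries being discussed look roughly like this; the TXT answer contains the address the authoritative server saw, with an edns0-client-subnet line when one was forwarded:
$ dig +short TXT o-o.myaddr.l.google.com @8.8.8.8
$ dig +short TXT o-o.myaddr.l.google.com @1.1.1.1    # no edns0-client-subnet line expected here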
This is the Cloudflare resolver, right? What's the "privacy-first" part about? It's just another third party DNS host. They haven't changed the protocol to be uninspectable and AFAIK haven't made any guarantees about logging or whatnot that would enhance privacy vs. using whatever you are now. This just means you're trusting Cloudflare instead of Comcast or Google or whoever.
"We will never log your IP address (the way other companies identify you). And we’re not just saying that. We’ve retained KPMG to audit our systems annually to ensure that we're doing what we say."
Now, audits are generally not worth very much (even, perhaps even especially, from a Big Four group like KPMG), but for this type of thing (verifying that a company isn't doing something they promised they would not do) they're about the best we have.
Worth noting they have already edited the article (less than 2 hours later) and taken out the "We will never log your IP" bit...
"We committed to never writing the querying IP addresses to disk and wiping all logs within 24 hours."
"While we need some logging to prevent abuse and debug issues, we couldn't imagine any situation where we'd need that information longer than 24 hours. And we wanted to put our money where our mouth was, so we committed to retaining KPMG, the well-respected auditing firm, to audit our code and practices annually and publish a public report confirming we're doing what we said we would."
It's not uncommon to retain logs like that for debugging purposes, abuse prevention purposes, etc, but then to go back later and wipe them or anonymize them.
Having dealt with KPMG recently (which I do at least once a year...), I would not expect to see the report.
KPMG's risk department - the lawyers' lawyers - appears to be violently allergic to their customers disclosing any report to outside parties. Based on my experience you can get a copy, but first you and the primary customer need to submit some paperwork. And among the conditions you need to agree with is that you don't redistribute the report or its contents.
Disclosure: I deal with security audits and technical aspects of compliance.
> KPMG's risk department - the lawyers' lawyers - appears to be violently allergic to their customers disclosing any report to outside parties.
Isn't that the entire point of such an audit? To be able to present it to outside third-parties?
For examples, Mozilla (CA/B) requires audits for root CAs. The CA must provide a link to the audit on the auditor's public web site -- forwarding a copy or hosting it on their own isn't sufficient.
You'd think, but it's surprisingly difficult to get the real full audit report. Mozilla's root policy _does_ require that they be shown the report, and has a bunch of extra requirements in there to ensure there's more detail, rather than some summary or overview document the auditors were persuaded to produce for this purpose. But the CA/B rules would allow just an audit letter which basically almost always says "Yes, we did an audit, and everything is fine" unless the auditors weren't comfortable writing "everything is fine". And almost always they feel that a footnote on a sub-paragraph buried in a detailed report is enough to leave "everything is fine" as the headline in the letter...
If you've ever been audited for some other reason, you'll know they find lots of things, and then you fix them, and that's "fine". But well, is it fine? Or, should we acknowledge that they found lots of things and what those things were, even if you subsequently fixed them? The CA/B says you have several months to hand over your letter after the audit period. Guess what those months are spent doing...
First of all, KPMG is the name of a group. All the Big Four are arranged as group companies: a single financial entity owns the name (e.g. "KPMG", "EY") from some friendly place (London in all but one case) and licenses out the right to operate a member company to professional services companies in various jurisdictions around the world. The group has the famous name, and sets some rules about training and compliance, but the employees will (almost all) work for the local member companies even though reporting for lay people will say the group name, as they do here.
Secondly, the idea in audit is not really about digging into the engineering. So although they will need people who have some idea what DNS is, they don't need experts - this isn't code review. The auditors tend to spend most of their time looking at paperwork and at policy - so e.g. we don't expect auditors to discover a Raspberry Pi configured for packet logging hidden in a patchbay, but we do expect them to find if "Delete logs every morning" is an ambition and it's not anybody's job to actually do that, nor is it anybody's job to check it got done.
I think it's somewhere in between, the article itself states:
"to audit our code and practices annually and publish a public report confirming we're doing what we said we would."
I run an investment fund (hedge fund) and we are completing our required annual audit (not by KPMG). It is quite thorough, they manually check balances in our bank accounts directly with the bank, they verify balances directly off blockchain (it's a crypto fund) and have us prove ownership of keys by signing messages, etc. And they do do a due diligence (lots of doodoo there) that we are not doing scammy things like the equivalent of having a raspberry pi attached to the network. Now this is extremely tough of course, and they are limited in what they can accomplish there, but the thought does cross their mind. All firms are different, but from what we've seen most auditors do decent good jobs most of the time. Their reputation can only be hit so many times before their name is no longer valuable to be an auditor.
Cloudflare is making a public pronouncement that they're not going to sell your DNS data nor track your IP address, with the implication that they will also not use the usage data to upsell you services. That's about the only additional "privacy" edge they offer.
In the same breath, they insinuate that Google both sells and uses DNS usage from their 8.8.8.8 and 8.8.4.4 resolvers.
They are NOT saying Google is lying and collecting the data. They are saying the business model of Google inherently provides such incentive.
Cloudflare is somewhat right: Means, Motive and Opportunity - but for a conviction you have to prove someone acted on the Opportunity. Google's Motive is tempered by the severe risk of losing trust.
Cloudflare can make an argument that they are fundamentally better positioned, and that is all they do. As with all US-based operations, the NSA may cook up some convincing counterarguments and we may never know.
>"They are NOT saying Google is lying and collecting the data."
The OP did not say that cloudflare is "saying" that. The OP very clearly said they are "insinuating" it. And yes under the heading "DNS's Privacy Problem" the post mentions:
"With all the concern over the data that companies like Facebook and Google are collecting on you,..."
I think that juxtaposition of this statement under a bolded heading of "DNS's Privacy Problem" is very much insinuating that.
Bear in mind, Google's changed its mind before and can again at any time. For instance, when they bought DoubleClick they promised not to connect it with the Google account data they had. Then they changed that policy later.
Is the suggestion that a company whose main business is targeting ads based on collecting data about you might be collecting data about you an unfair insinuation?
Please follow the thread - the question of whether an insinuation if "fair" is not what's being discussed. What's being discussed is whether or not Cloudflare said or insinuated that there were privacy concerns with using 8.8.8.8.
> they insinuate that Google both sells and uses DNS
I don't think it's intended to say anything about Google specifically. Keep in mind that there are many other DNS services out there, and some of them are known for being pretty scummy, e.g. replacing NXDOMAIN results with "smart search" / ad pages.
"Privacy First: Guaranteed.
We will never sell your data or use it to target ads. Period.
We will never log your IP address (the way other companies identify you). And we’re not just saying that. We’ve retained KPMG to audit our systems annually to ensure that we're doing what we say.
Frankly, we don’t want to know what you do on the Internet—it’s none of our business—and we’ve taken the technical steps to ensure we can’t."
They want fast resolution of names that point to websites hosted by Cloudflare. Cloudflare makes their money selling their network to businesses that use it, and anything that makes that service better for the end-user increases customer stickiness.
Maybe not _as_ relevant, but still a considerable number of clients are configured to trust OpenDNS, and their far more ambiguous stance on what exactly this is for is appealing to some people. For example, OpenDNS says yes, absolutely it is their business what you're looking up, and maybe you are a Concerned Parent™ who wants to ensure their children don't access RedTube, so that feels like a good idea.
I was thinking more along the lines of their SME offering. DNS filtering is an important layer in network security and CloudFlare’s position of being in the middle of a large portion of Internet traffic, alongside now trying to attract a chunk of general DNS queries, potentially gives them a great deal of insight into who the bad actors are.
I think the whole point for such free services is to log that data and extract statistical meaning out of it - in this case, they pledge to use an anonymized format. On the other hand CloudFlare's mission (ensure secure, solid end to end connectivity) is much better aligned with the user's needs than Google's mission (sell more ads).
Google is one of the first ones using DNS over HTTPS.
BTW, if you want to use DNS over HTTPS on Linux/Mac I strongly recommend dnscrypt-proxy v2 (the Go rewrite), https://github.com/jedisct1/dnscrypt-proxy, and put e.g. cloudflare in its TOML config file to make use of it.
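The relevant bits of the v2 setup are small; something like the following in dnscrypt-proxy.toml, whose path differs per distro, where 'cloudflare' is the name of the DoH entry in the bundled public resolver list:
$ grep -E '^(server_names|listen_addresses)' /etc/dnscrypt-proxy/dnscrypt-proxy.toml
server_names = ['cloudflare']
listen_addresses = ['127.0.0.1:53']
# then point /etc/resolv.conf (or your network manager) at 127.0.0.1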
Not really. Typically the query includes much more information (the site you want to visit) than the response (an IP potentially shared by thousands or millions of sites).
If it was easy, it would have been done during the TLS 1.3 process, but after a lot of discussion we're down to basically "Here is what people expect 'SNI encryption' would do for them, here's why all the obvious stuff can't achieve that, and here are some ugly, slow things that could work, now what?"
It is hard because of TLS's pre-PFS legacy and, to some extent, also because of the (very meaningful) intention to reduce roundtrips. The way to do SNI-like stuff is obvious: negotiate an unauthenticated encrypted channel (by means of some EDH variant; you need one roundtrip for that) and perform any endpoint authentication steps inside that channel. This is what SSH2 does, and AFAIK Microsoft's implementation of encrypted ISO-on-TCP (e.g., rdesktop) does something similar.
Edit: in SSH2 the server authentication happens in the first cryptographic message from the server (for obvious efficiency reasons), and thus for doing SNI-style certificate selection there would have to be some plaintext server ID in the first client message, but the security of the protocol does not require that as long as the in-tunnel authentication is mutual (it is for things like Kerberos).
So, it feels like you're saying this is how SSH2 and rdesktop work, and then you caveat that by saying well, no, it turns out they don't actually offer this capability at all.
You are correct that you can do this if you spend one round trip first to set up the channel, and both of the proposals for encrypting SNI in that draft do pay a round trip. Which is why I said they're slow and ugly. And as you noticed, SSH2 and rdesktop do not, in fact, spend an extra round trip to buy this capability; they just go without.
This does not make sense. Either people are not concerned about hiding their traffic, or, if they are, it follows that they would be equally if not much more concerned about Google, which can track them across devices and build far more in-depth, invasive profiles than the ISP can.
As an aside, it's strange that HTTPS everywhere has been pushed aggressively by many here under the bogeyman of ISP adware and spying, while completely ignoring the much larger adware and privacy threats posed by the tracking done by Google, Facebook, and others. It is disingenuous and insincere.
I can only really discuss the UK, since that's the only place where I've bought home ISP service.
Only a handful of small specialist firms actually just move bits in the UK. Every single UK ISP big enough to advertise on television is signed up to filter traffic and block things for being "illegal" or maybe if Hollywood doesn't like them, or if they have "naughty" words mentioned, or just because somebody slipped. If you're thinking "Not mine" and it runs TV adverts then, oops, nope, you're wrong about that and have had your Internet censored without realising it. I wonder how ISPs got their bad reputation...
Did you read the page? They're supporting DNS over TLS and DNS over HTTPS - both changes to the protocol to make it uninspectable. They've also said they're not logging IP info, and they're getting independent auditors in to confirm what they're saying. Sounds trustworthy to me.
Both encrypted extensions are of course inspectable at the end-point, which is the privacy model being discussed.
What is intriguing to me is why Cloudflare are offering this. Perhaps it is to provide data on traffic that is 'invisible' to them, as in it doesn't currently touch their networks. Possibly as a sales-lead generator.
Or is the plan to become dominant and then use DNS blackholing to shutdown malware that is a threat to their systems?
The goal is to make the sites that use Cloudflare ridiculously fast by putting the authoritative and recursive DNS on the same machine (for clients who use 1.1.1.1).
Cloudflare is already a significant enough player in handling Internet traffic. Maybe the company does want to do good for the sake of doing good, but I’m wary of companies taking over in this manner and making the Internet more like a monolith than a distributed system.
It seems like bait-and-switch though? They talk about DNS over HTTPS and DNS without logging, and then direct you to installation instructions where you learn how to use the "DNS without logging" part, but nothing that's encrypted? What am I missing?
When I've seen DNS-over-HTTPS in the past I've always thought it odd that it's setup with a DNS name for the HTTPS address, requiring a plain DNS lookup before it starts using HTTPS. I assumed this was done because they didn't have a valid TLS cert for the IP address. But 1.1.1.1 actually has a valid TLS cert, yet their setup instructions say to use the DNS name cloudflare-dns.com instead of the IP.
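To make the bootstrap issue concrete, here's roughly what a query against the JSON DoH endpoint looks like with curl (endpoint and parameters as documented by Cloudflare at launch; treat the exact URLs as assumptions):
# Uses the hostname, so one plain DNS lookup for cloudflare-dns.com happens first
curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=example.com&type=A'
# Skips that bootstrap lookup entirely; works because 1.1.1.1 is in the certificate's SANs
curl -s -H 'accept: application/dns-json' 'https://1.1.1.1/dns-query?name=example.com&type=A'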
I suppose I see your point, but since DNS-over-HTTPS only supports HTTPS (not HTTP) and therefore requires a valid certificate for the requested resolver, there's no risk of the protocol being downgraded to HTTP or easily spoofed.
That is a good point, though I wasn't thinking about it from a security perspective. I was more imagining an ISP or nation that is trying to control content by blocking/faking DNS queries. They could block the first DNS query if DNS-over-HTTPS doesn't use an IP for the resolver.
Of course an ISP or nation could block/reroute the IP 1.1.1.1 too, so maybe it doesn't matter. Neither way would allow MITM, I was just thinking about ways oppressive ISPs/nations could stop DNS-over-HTTPS from working.
Not surprising: Google, despite being blocked in China, has a lot of presumably expensive paid transit from the big 3 mainland China telcos in and out of the mainland to Hong Kong.
Cloudflare serves sites visited from China that aren't using their China-requires-an-ICP-license service from their west coast USA location where the big 3 Chinese telcos will peer for free.
Yup, the Subject Alternative Name (often misunderstood as an alias, but "Alternative" here is meant in the sense of this is the Internet's _Alternative_ way to name things versus the X.500 series directory hierarchy that the X.509 certificates are originally intended for) can be one of several distinct types, the two relevant for servers are dnsName and ipAddress. dnsName can be any er, name, in the DNS hierarchy, or, as a special case, a "wildcard" with asterisks, whereas ipAddress can be any type of IP address, currently either IPv4 or IPv6.
The Baseline Requirements agreed between Web Browser vendors and root Certificate Authorities dictate how the CA can figure out if an applicant is allowed a certificate for a particular name, for dnsNames this is the Ten Blessed Methods, for ipAddress the rules are a bit... eh, rusty, but the idea is you can't get one for that dynamic IP you have from your cable provider for 24 hours, but somebody who really controls the IP address can get one. They're uncommon, but not rare, maybe a dozen a day are issued?
Your web browser requires that the name in the URL exactly matches the name in the certificate. So if you visit https://some-dns-server.example/ the certificate needs to be for some-dns-server.example (or *.example) and a certificate for 1.1.1.1 doesn't work, even if some-dns-server.example has IP address 1.1.1.1 - so this cert is only useful because they want people actually typing https://1.1.1.1/ into browsers...
[edited, I have "Servers" on the brain, it's _Subject_ Alternative Name, you can use them to name email recipients, and lots of things that aren't servers]
Thanks for the clarification. I did know it was possible when setting up CA's for VPN servers, they can use certificates with DNS and/or IP as identifiers. Somehow I never thought about certificates for public IP addresses.
Edit: I had not realised what the parent comment here meant, that you can connect to an IP address without getting an error by adding the IP to the SAN. My explanation below is about finding certs installed for a given IP/hostname, typically with openssl.
Yes, but...
This only works if they don't use SNI[1]. If they use SNI then you just get the default cert. They might have more certs for other hostnames served on that IP address.
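A rough way to see this with openssl (the hostname here is just an example; output format differs slightly between OpenSSL versions):
# With SNI: ask for the certificate served for cloudflare-dns.com on that IP
openssl s_client -connect 1.1.1.1:443 -servername cloudflare-dns.com </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
# Without -servername: you get whatever default certificate the server presents for that IP
openssl s_client -connect 1.1.1.1:443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'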
So, one thing I'd love to see clarified: APNIC was interested in studying the junk traffic to 1.1.1.1. Cloudflare's DNS will not log or track. So what is logged and tracked for APNIC's research purposes? Everything but DNS? Everything but DNS and HTTPS requests directly to 1.1.1.1 (presumably people looking for details on Cloudflare DNS?).
What's being studied?
Fun fact: CCNA classes regularly use 1.1.1.1 as a router-id. Really good reason now not to configure it via a loopback address.
My strong impression is that they wouldn't give APNIC any data that can be used to identify users of their DNS service, but I'd definitely love a more detailed answer than what the site currently provides.
> We will be destroying all “raw” DNS data as soon as we have performed statistical analysis on the data flow. We will not be compiling any form of profiles of activity that could be used to identify individuals, and we will ensure that any retained processed data is sufficiently generic that it will not be susceptible to efforts to reconstruct individual profiles. Furthermore, the access to the primary data feed will be strictly limited to the researchers in APNIC Labs, and we will naturally abide by APNIC’s non-disclosure policies.
Most of those terms relate to APNIC "ad" placement, and it specifies as such. They likely do not apply here, as it seems Cloudflare is not tracking the IP address, and things like browser fingerprinting wouldn't even show up in a DNS request.
The highlight point to me is that they not only say they won't collect data that could be used to identify individuals, but also seem to realize that even seemingly anonymized data can be traced back to individuals, hence the further claim.
I'm inclined to give APNIC the benefit of the doubt, they're a nonprofit, and a fundamental part of the Internet's addressing structure, but it'd be nice to get a bit more detail from them on what they _do_ collect.
"Cloudflare's 1.1.1.1 DNS will respond very fast, but the big sites you access, the whole reason for resolving DNS, will be SLOWER ∵ no edns-client-subnet support, so no geolocation of results." - https://twitter.com/philpennock/status/980561009961299968
Cloudflare runs from 151 (and growing rapidly) locations worldwide. Without edns-client-subnet, the upstream DNS server will probably respond according to the geolocation of the Cloudflare location you're talking to -- which is probably pretty close to you, and therefore will probably produce a good outcome for you, while largely avoiding the privacy concerns.
Using ping to compare the two may introduce a skew based on how the two networks prioritize ICMP.
For example, from my network google is averaging a faster response by ~.5ms
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=28.0 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=19.2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=19.1 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=19.0 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=20.5 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=59 time=19.6 ms
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5010ms
rtt min/avg/max/mdev = 19.043/20.950/28.072/3.226 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=19.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=20.1 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=20.6 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=54 time=21.1 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=54 time=21.9 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=54 time=19.4 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5008ms
rtt min/avg/max/mdev = 19.114/20.414/21.922/0.988 ms
However, if I do DNS lookups against a few major domains, Google is actually slower by ~2ms
$ for domain in microsoft.com google.com cloudflare.com facebook.com twitter.com; \
do cloudflare=$(dig @1.1.1.1 ${domain} | awk '/msec/{print $4}'); \
google=$(dig @8.8.8.8 ${domain} | awk '/msec/{print $4}');\
printf "${domain}:\tcloudflare ${cloudflare}ms\tgoogle ${google}ms\n";\
done
microsoft.com: cloudflare 22ms google 23ms
google.com: cloudflare 19ms google 22ms
cloudflare.com: cloudflare 19ms google 23ms
facebook.com: cloudflare 21ms google 20ms
twitter.com: cloudflare 19ms google 21ms
You'd have to run a bunch of queries to see if there is an actual impact vs. just an outlier (e.g. the first ping response from cloudflare), just wanted to point it out.
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=13.8 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=14.6 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=13.7 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=14.1 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=13.7 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=59 time=15.3 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=43.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=46 time=42.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=46 time=43.1 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=46 time=42.0 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=46 time=42.4 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=19.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=19.9 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=55 time=19.8 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=55 time=19.7 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=55 time=19.8 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=55 time=19.7 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=55 time=19.8 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=55 time=19.7 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=55 time=19.8 ms
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=0.390 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=0.565 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=0.472 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=0.556 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=0.560 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=57 time=0.573 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=57 time=0.359 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=57 time=0.575 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=57 time=0.543 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=57 time=0.548 ms
From Zagreb, Croatia.
I guess that new cloudflare POP is paying off.
My ISP peers with cloudflare in Sydney (~40ms), even though there is a CF datacenter in Auckland, New Zealand (~10ms)
I'm in Wellington.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=37.9 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=36.9 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=36.7 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=35.9 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=35.4 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=35.2 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=35.2 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=35.7 ms
How are you getting those single digit times? I can never get below 15 ms for both Google and CloudFlare. Any tips to improve this, or is it beyond my control?
~ ping -c 10 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=1.15 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=1.15 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=1.06 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=64 time=1.04 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=64 time=1.03 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=64 time=1.01 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=64 time=1.02 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=64 time=1.07 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=64 time=1.00 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=64 time=0.848 ms
--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9009ms
rtt min/avg/max/mdev = 0.848/1.042/1.153/0.086 ms
~ ping -c 10 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=6.82 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=6.72 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=6.39 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=6.73 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=6.55 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=56 time=6.14 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=56 time=6.24 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=56 time=6.22 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=56 time=6.19 ms
64 bytes from 8.8.8.8: icmp_seq=10 ttl=56 time=6.30 ms
--- 8.8.8.8 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9011ms
rtt min/avg/max/mdev = 6.149/6.433/6.826/0.248 ms
$ ping -c 10 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=1789.957 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=19.620 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=9.372 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=11.585 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=60 time=20.660 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=60 time=11.808 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=60 time=12.784 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=60 time=11.908 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=60 time=11.373 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=60 time=11.992 ms
--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 9.372/191.106/1789.957/532.962 ms
$ ping -c 10 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=60 time=1308.156 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=17.557 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=13.043 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=60 time=16.217 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=60 time=15.033 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=60 time=15.132 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=60 time=14.157 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=60 time=16.100 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=60 time=15.600 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=60 time=13.837 ms
--- 8.8.8.8 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 13.043/144.483/1308.156/387.893 ms
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=3.57 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=3.30 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=3.31 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=3.21 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=3.21 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=3.15 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=3.17 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=2.34 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=2.93 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=3.19 ms
MyRepublic:
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=1.88 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=1.93 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=1.96 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=60 time=1.85 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=60 time=1.85 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=1.86 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=59 time=1.66 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=59 time=1.40 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=59 time=1.38 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=59 time=1.60 ms
Looks like Google DNS's still a little bit faster.
These are only averages though, and by testing a bit more with uncached domains I found the first hit will take a lot longer with cloudflare than with google.
Microsoft Windows [Version 10.0.16299.309]
(c) 2017 Microsoft Corporation. All rights reserved.
C:\Users\ram>tracert 1.1.1.1
Tracing route to 1dot1dot1dot1.cloudflare-dns.com [1.1.1.1]
over a maximum of 30 hops:
1 6 ms 11 ms 5 ms 192.168.1.1
2 5 ms 5 ms 23 ms 10.4.224.1
3 * * * Request timed out.
4 15 ms 7 ms 10 ms 103.56.229.1
5 * * * Request timed out.
6 45 ms 56 ms 44 ms 115.255.252.225
7 86 ms 84 ms 87 ms 62.216.144.77
8 169 ms 173 ms 175 ms xe-2-0-4.0.cjr01.sin001.flagtel.com [62.216.129.161]
9 174 ms 174 ms 169 ms ge-2-0-0.0.pjr01.hkg005.flagtel.com [85.95.25.41]
10 173 ms 174 ms 170 ms xe-3-2-2.0.ejr04.seo002.flagtel.com [62.216.130.25]
11 171 ms 173 ms 170 ms 1dot1dot1dot1.cloudflare-dns.com [1.1.1.1]
Trace complete.
C:\Users\ram>tracert 8.8.8.8
Tracing route to google-public-dns-a.google.com [8.8.8.8]
over a maximum of 30 hops:
1 88 ms 305 ms 98 ms 192.168.1.1
2 13 ms 98 ms 102 ms 10.4.224.1
3 * * * Request timed out.
4 * 16 ms * 10.200.200.1
5 9 ms 3 ms 8 ms 209.85.172.217
6 11 ms 5 ms 9 ms 108.170.251.103
7 40 ms 33 ms 37 ms 209.85.246.164
8 * 90 ms 89 ms 209.85.241.87
9 89 ms 86 ms 89 ms 216.239.51.57
10 * * * Request timed out.
11 * * * Request timed out.
12 * * * Request timed out.
13 * * * Request timed out.
14 * * * Request timed out.
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 * * * Request timed out.
19 87 ms 82 ms 87 ms google-public-dns-a.google.com [8.8.8.8]
Trace complete.
C:\Users\ram>tracert resolver2.opendns.com
Tracing route to resolver2.opendns.com [208.67.220.220]
over a maximum of 30 hops:
1 3 ms 7 ms 8 ms 192.168.1.1
2 12 ms 11 ms 41 ms 10.4.224.1
3 * * * Request timed out.
4 21 ms 21 ms 51 ms 103.56.229.1
5 * 62 ms 12 ms 115.248.235.150
6 * 408 ms 65 ms 115.255.252.229
7 43 ms 49 ms 40 ms 14.142.22.201.static-Mumbai.vsnl.net.in [14.142.22.201]
8 * 41 ms 57 ms 172.23.78.237
9 46 ms 32 ms 29 ms 172.19.138.86
10 73 ms 46 ms 42 ms 115.110.234.50.static.Mumbai.vsnl.net.in [115.110.234.50]
11 41 ms 64 ms 44 ms resolver2.opendns.com [208.67.220.220]
I am getting ERR_CERT_AUTHORITY_INVALID because my ISP-provided router is intercepting the connection and trying to show me a "helpful" configuration wizard. No Cloudflare DNS for me.
To be explicit: This is not Cloudflare's fault and we should blame the manufacturer of the router, or the ISP for deploying their custom "friendly" settings. But it is what it is.
Yup, these ranges are poisonous, which is why APNIC kept them, so this is effectively to be expected. It would actually be extraordinary if, since the range was determined to be poisonous and so mustn't be delegated, this had magically fixed itself. So I was sort of expecting to see some comments in the last thread about 1.1.1.1 like yours.
The "good" news is that this isn't being used for anything you really need - imagine if 1.1.1.1 had been delegated and now it was the resolution for www.facebook.com or indeed news.ycombinator.com ...
The bad news is that idiots do not learn from their mistakes (that's Dunning-Kruger); the people who built your device don't understand why this was the Wrong Thing™ and won't now seek to do better in the future. If we're lucky they'll go out of business, but that's the best we can hope for.
I've noticed I measured with my VPN on, so I put the VPN measurements in brackets behind the nominal values. The 8.8.8.8 benchmark is a bit odd but I repeated it several times with 100 iterations each and this is basically what I get.
Where are you located? I am in the rural north Bay Area California and my numbers are shocking:
Ping statistics for 1.1.1.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 1ms, Maximum = 2ms, Average = 1ms
Ping statistics for 8.8.8.8:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 25ms, Maximum = 27ms, Average = 26ms
Ping statistics for 8.8.4.4:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 26ms, Maximum = 28ms, Average = 27ms
I'm not aware of any native Pi-hole setting to enable a full resolver instead of using forwarding.
Third-party services like this will also have a huge range of queries cached, so the response time will definitely be better than having a Raspberry Pi with little free memory try to cache all of that.
> What many Internet users don't realize is that even if you're visiting a website that is encrypted — has the little green lock in your browser — that doesn't keep your DNS resolver from knowing the identity of all the sites you visit. That means, by default, your ISP, every wifi network you've connected to, and your mobile network provider have a list of every site you've visited while using them.
> Network operators have been licking their chops for some time over the idea of taking their users' browsing data and finding a way to monetize it.
The "1.1.1.1 stops ISPs/Starbucks from selling your browsing history" pitch is untrue and, given Cloudflare's expertise, seems disingenuous.
HTTPS transmits the domain name unencrypted in the TLS handshake (SNI). So even if DNS lookups are completely hidden, my ISP can still log all the domains I visit by inspecting my HTTPS connections.
And the domain log from my web requests is more valuable than my DNS log. Advertisers and data aggregators can see the true timing and frequency of my browsing history, whereas a DNS log is affected by router/OS/browser lookup caching.
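To illustrate how visible this is on the wire, here's a sketch with tshark (field names assume a recent Wireshark build where the dissector is called "tls"; older builds use "ssl"):
# Print the SNI hostname from every TLS ClientHello seen on eth0
tshark -i eth0 -Y 'tls.handshake.type == 1' -T fields -e tls.handshake.extensions_server_name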
I agree that a non-Google public resolver, which comes with guarantees about how they'll use your data, is a good thing.
I'm taking exception with Cloudflare's announcement, which makes a pitch to end users that CF can protect your domain history from ISP snooping, then links to a two-minute setup guide for people with "no technical skill". They really can't protect your domain history, and I feel bad for people using this service who have been led to believe otherwise.
AFAIK there is nothing in the TLS 1.3 draft [1] about SNI encryption. There are other draft proposals for SNI encryption that build on top of TLS 1.3 [2]. It's a hard problem and there are no deployed solutions I'm aware of.
This is super exciting -- Public DNS space frankly needs more entrants.
I've been a long time user of OpenDNS's public DNS service (and have come to adore it greatly). Other recent new entrant to this space worth mentioning includes Global Cyber Alliance's [0] Quad9 DNS service, launched in Q4 2017.
This to me looks like a good move by Cloudflare, business model wise, given the increasing awareness among general public to the dangers of privacy breaches -- aside from the supposed boost in network speed piggybacking off of Cloudflare's extensive server farm network [1].
Whether the service delivers on its bold claims, however, remains to be seen. I'm going to go give this a shot now.
Is there a service that Quad9 offers that does not have the blocklist or other security?
The primary IP address for Quad9 is 9.9.9.9, which includes the blocklist, DNSSEC validation, and other security features. However, there are alternate IP addresses that the service operates which do not have these security features. These might be useful for testing validation, or to determine if there are false positives in the Quad9 system.
Secure IP: 9.9.9.9 Provides: Security blocklist, DNSSEC, No EDNS Client-Subnet sent. If your DNS software requires a Secondary IP address, please use the secure secondary address of 149.112.112.112
Unsecure IP: 9.9.9.10 Provides: No security blocklist, DNSSEC, sends EDNS Client-Subnet. If your DNS software requires a Secondary IP address, please use the unsecure secondary address of 149.112.112.10
Note: Use only one of these sets of addresses – secure or unsecure. Mixing secure and unsecure IP addresses in your configuration may lead to your system being exposed without the security enhancements, or your privacy data may not be fully protected
"Here are some DNS measurements comparing @Google Public DNS, @Quad9DNS and @Cloudflare, v6 and v4. Sourced from AS3320 near Frankfurt. Quad9 is fastest in avg. The proposed v6 address from Cloudflare is not yet working, but the longer ones."
Cloudflare is saying they won't log, and will have yearly audits by KPMG to prove it. Not logging and logging anonymized data are different approaches.
FTA: While we need some logging to prevent abuse and debug issues, we couldn't imagine any situation where we'd need that information longer than 24 hours.
So the difference is how long the logs are kept, and possibly what the log data is used for.
I don't use them (even though I would love to) because it takes approximately 3x as long to reach the server.
To compare the two, together with Google's DNS as a reference, from a fast connection:
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=3.62 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=60 time=3.60 ms
64 bytes from 9.9.9.9: icmp_seq=5 ttl=60 time=9.20 ms
...and from a slower (home) connection:
64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=11.1 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=59 time=11.9 ms
64 bytes from 9.9.9.9: icmp_seq=5 ttl=59 time=34.2 ms
Note that I just used the speed of every fifth packet instead of the average over five packets, in order to keep the comment relatively short and more humanly readable than "rtt min/avg/max/mdev".
Do you think that ~23ms is going to make any real, perceptible difference to your internet performance? Considering that a) your browser will make any DNS requests it needs in parallel when loading a web page, and b) most DNS requests will be cached anyway.
I'm not sure what you meant in point (a) but, of course, DNS cannot be parallelized with HTTP since the browser doesn't know where to connect until DNS completes. Also, DNS requests for subresources can't start until the referring resource has been loaded. So you could easily see a few serialized DNS requests in the long pole for loading a web site.
Also note that the timings above were ping times. An actual DNS query will have to recurse if the result is not cached at the DNS server -- which in these days of 60-second TTLs is not uncommon. Cloudflare, though, happens to be the authoritative DNS for quite a few web sites, in which case no recursion is necessary.
I meant that DNS requests are parallelized within the browser. Once it loads the initial resource (html), there might be 10 more dependencies it needs at various different URLs under different domain names. It's usually loading all these dependencies that make up the vast majority of the load time on a complex web page.
Those subsequent DNS requests can of course be made in parallel, so if your DNS latency is 20ms then you're adding ~20ms, not 10 x 20ms.
Even then, DNS is probably making up a small fraction of the overall load time. If a complex page is taking, say, 3000ms to load and render, then adding 20-40ms of DNS time is not going to make a perceptible difference.
I'm curious to read the reports from the garbage traffic they get at their 1.x.x.x addresses. Must be a ton of computers sending traffic that way. On the other hand, there's probably quite a few networks where 1.x.x.x is unreachable or routed to a local captive network access server, too.
There’s more to dns performance than query time. Cloudflare doesn’t seem to be sending the EDNS client subnet to authoritative resolvers, which means those resolvers can’t give sensible nearest-to-client responses. This is a crucial feature of what makes the modern web fast.
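You can see what ECS carries by attaching a subnet to a query yourself with dig (the prefix below is just a documentation range; whether a given resolver honors a client-supplied subnet varies):
# Claim the query comes from 203.0.113.0/24; for CDN-hosted names the answer may shift with the prefix
dig @8.8.8.8 www.example.com A +subnet=203.0.113.0/24 +noall +answer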
It would be hard to claim to be a dns service which helps protect your privacy while also forwarding your subnet info on to other DNS servers.
Cloudflare has a large number of PoPs and is increasing them rapidly. If the service is distributed to them all, then the authoritative server is likely to give a response similar to the one it would have provided if the subnet had been explicitly included, since the Cloudflare PoP sending the request will be located close, network-wise, to the client that originally made the request. This isn't always going to be true, but the slightly higher odds of not connecting to the optimal location for the service are probably worth the increase in privacy.
What exactly is the privacy threat model in this situation? If you are about to connect to the resolved service it makes no difference that you hid your subnet from that service’s DNS server.
What if a client blackholes all traffic to some network due to some privacy-related reason? If cloudflare tells that provider (via name resolution) who's resolving names, some of that client's PII is possibly shared before the blackhole decision can even be made.
That seems a bit contrived but just rolling with it, this hypothetical org with ultra-sensitive opsec should have also blacklisted the domain in question at their inside resolver.
To the downvoters: perhaps a source will placate you: https://www.grc.com/sn/sn-641.htm (search for WINE). I apologize for providing facts that might help someone that wants to run this.
If I send a request to 1.0.0.1 for a specific RR that I'm 99.9% certain isn't cached (although I didn't check the query logs on the authoritative DNS servers to verify a request actually came in), the response contains the (expected) TTL of 14400.
If I then send the same request to 1.1.1.1, I get a response that is identical except with a TTL of 3591 seconds.
According to the timestamps in my client, the second request was made nine seconds after the first one (3591+9=3600), hence my question: is Cloudflare "overriding" the TTL I explicitly set on this specific RR (14400s) with a different TTL (i.e., 3600s)?
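For anyone who wants to reproduce this kind of check, the TTL is the second column of dig's answer section (the record below is just a placeholder):
# Query the secondary address first, then repeat against 1.1.1.1 and compare the TTL column
dig @1.0.0.1 example.com A +noall +answer
dig @1.1.1.1 example.com A +noall +answer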
Yes, there's a cap on both negative and positive cache lifetime. The reason is to reduce the blast radius when accidents happen, and it hurts especially on long-lived infrastructure records (a mistake while repointing NSs, bad glue, an expired DS, etc.)
We're going to be looking into making the cap more dynamic over time.
I do this at home as well, using Unbound DNS to set a min and max TTL. It's taboo on public DNS recursors, but it totally makes sense. Some folks try to use DNS as a real-time load balancer and will set crazy-low TTLs like 1 second or even 0 (which violates RFCs)
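For reference, a minimal sketch of that clamp in unbound.conf (option names come from Unbound's documentation; the values are just examples, not a recommendation):
server:
    # never cache an answer for less than 60 seconds, even if the zone says so
    cache-min-ttl: 60
    # never cache an answer for longer than a day
    cache-max-ttl: 86400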
Sorry, but the only DNS resolver which can really claim to be "privacy first" and can be completely trusted is the one built with opensource code running on your own system.
So a VPS with enough storage plus Unbound and you're pretty much done in regards to "privacy first" and "trust".
> We committed to never writing the querying IP addresses to disk and wiping all logs within 24 hours.
> Cloudflare's business has never been built around tracking users or selling advertising. We don't see personal data as an asset; we see it as a toxic asset. While we need some logging to prevent abuse and debug issues, we couldn't imagine any situation where we'd need that information longer than 24 hours.
How about aggregate stats? Will CloudFlare be keeping track of any long term usage statistics per domain?
I'm not talking about tracking the person making the request. I'm referring to tracking the hostnames that are being resolved. Given the near 1:1 mapping between users accessing a website and DNS resolution for that website[1], wide-scale usage of something like this gives decent analytics on net usage of any website, even if it's not served by CloudFlare.
[1]: Assuming the DNS response cache times are low enough that a new user session to a website would require a fresh DNS request to resolve the website's IP.
Lately I’ve been thinking about some concerns about domain name privacy:
• My ISP can spoof DNS responses.
• My ISP can sniff DNS requests.
• My ISP can sniff SNI.
• My ISP can look up reverse DNS on the IPs I visit.
DNS over TLS is nice—I just set up Unbound on my router to use 1.1.1.1@853 and 1.0.0.1@853 as forwarding zones. That eliminates the first bullet, at the cost of allowing CloudFlare to track my DNS requests.
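Roughly what that forwarding setup looks like in unbound.conf, as a sketch (option names per Unbound's docs; forward-tls-upstream and tls-cert-bundle need a reasonably recent Unbound, and the CA bundle path varies by distro):
server:
    # CA bundle used to verify the resolver's certificate
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt
forward-zone:
    # forward everything to Cloudflare over DNS-over-TLS on port 853
    name: "."
    forward-addr: 1.1.1.1@853
    forward-addr: 1.0.0.1@853
    forward-tls-upstream: yes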
I wonder how easy it is to route DNS‐over‐TLS over Tor?
It’s not like I’d be running everything over Tor. DNS requests for newly‐visited domains would slow down, but unbound’s prefetch feature would keep popular frequently‐used domains cached. Adding one of those advertising domain blacklists might help performance too.
The point would be to keep Cloudflare from being able to track my DNS requests.
A VPN gives you little protection against browser fingerprinting, which may alone leak enough information about you to identify you. Also privacy-by-policy is in no way near privacy-by-design. If you want privacy, use the Tor Browser.
I would love to use better DNS resolvers like this instead of crappy ISP-provided ones.
My only complaint is for public wifi that requires displaying some captive portal page - accepting ToS, signing in with your room number, airline wifi, etc. These usually break when you don't use their automatically provided DNS servers, requiring you to remove your preferred DNS entries, wait for the wifi popup to open, do the required thing, and then put back your preferred DNS servers. I end up just keeping the defaults, and that's a shame.
Put this DNS in your home router and not directly in your PC. Now your PC will fetch DNS at home from this fast DNS, and on public wifi it will use theirs.
Some ISPs and their routers don't allow for the DNS settings to be changed, unfortunately. Still can be worked around, but sometimes the easiest solution is to just edit the DNS settings directly.
Very impressed so far. I wonder if Cloudflare already has or is going to provide an IP address lookup service too, like OpenDNS and Google have? I find it quite useful to be able to just do something like:
dig -4 +short myip.opendns.com a @resolver1.opendns.com
A big PITA for me right now with friends and family is changing DNS. They all have these Xfinity cable modem boxes that have integrated WiFi and Ethernet. It's not possible to change the DNS through the web interface. So I have to convince everyone to buy a separate AP or a 3rd party (but ISP approved) cable modem, and then what ensues is I'm now responsible for that device because Xfinity washes their hands entirely if there are any problems.
I'm not sure which modem you have, but the Cisco modem I used to use with the built-in WiFi just as you describe absolutely has the ability to go in and edit the DNS servers assigned by DHCP under Connection > Local IP Network.
I also have the remote access enabled for my family members so I can diagnose and make changes like this directly on their modem.
ARRIS Group, Inc. TG1682G, less than a year old. This is what everyone has in Denver, as far as I'm aware. Most of the device's settings aren't manageable from its own web interface; I have to go to xfinity.com/myxfi, log in to the account, and then it pushes changes to the cable modem/AP. This includes the login password for the device's web interface. Thoroughly screwy in my opinion.
Anyway, there is a Connection > Local IP Network. But no DNS settings anywhere.
Does it work well with (non-CloudFlare) CDNs, or is this another DNS service that won't work with Netflix on a Friday night because it routes everyone to a single edge node?
You should be measuring DNS time, not ping time. There's more to how fast a DNS resolver responds than the time it takes to send the packet over the wire.
As a Comcast@Home subscriber in SF, 1.1.1.1 is approximately 3x as fast as Comcast's own DNS (testing using dig).
A couple of years ago there was a tool built by some Google employee called namebench which benchmarked a couple of DNS servers and helped you to find the best one based on your browser history. Unfortunately, the project seems abandoned:
Maybe they could do some form of auditing that is traceable by the average consumer. For instance:
They could make their deployment setup completely automated and publish the tooling to github, and have video evidence of them deploying the same SHA-256 stamped tooling to their data centers. They could expose operational details and transactions on their DNS servers as far as possible without revealing identifiable information. They could have regular physical audits by a constantly rotating set of well known and trusted parties (i.e. EFF, Mozilla).
My main problem with these DNS services is that they often break gated wifi networks that require a login page to access. It's horrible that it has become standard practice to take over DNS to redirect to an access gate, but as users, your choices are either: suck it up, or no internet.
According to the Firefox bug report, this was an Apple bug (related to TCP Fast Open) fixed in macOS 10.13.4. When I updated to 10.13.4, the panic went away.
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=11.6 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=11.2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=10.8 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=11.1 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=10.9 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=15.9 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=15.1 ms
I still find myself wondering about things like the iPhone's DNS settings though. You can't change them while connected to a cell network, and it always seemed strange to me that that was considered okay.
Your best bet would be to configure your own DNS server (on your router, for example, assuming support) to use DNS-over-(HTTPS|TLS) and then have all of your other devices use your router as their DNS server.
> DNS resolvers inherently can't use a catchy domain because they are what have to be queried in order to figure out the IP address of a domain. It's a chicken and egg problem. And, if we wanted the service to be of help in times of crisis like the attempted Turkish coup, we needed something easy enough to remember and spraypaint on walls.
In fact, people wrote that DNS address on walls precisely to get around the government's censorship, so you wouldn't be helping the government.
I'm guessing it's less "sophisticated reinvention" and more usage of existing TLS connection technology. You can use both (even at the same time) with DNScrypt-proxy V2 https://github.com/jedisct1/dnscrypt-proxy ;)
1) These requests are all in the clear, so your isp can read them and see which hosts you're asking for. VPNs provide better privacy (assuming you choose a private one).
2) yup
3) often. You can set it on your computer, but some public WiFi systems will block it.
In Bendigo Australia, and Steve Gibson's DNSBenchmark tells me that of my 50 optimised resolvers (with 1.1.1.1 and 9.9.9.9 added), the two fastest public DNS services I should use are 9.9.9.9 followed by 8.8.8.8. I also add a couple of others for redundancy..
It's very likely that they have 1.1.1.1 in their "bogons" list and have for a very long time.
Bogons are a list of prefixes that most ISPs blackhole as there is usually never any legitimate traffic bound for those destinations. RFC1918 addresses, for example.
I can't reach 1.1.1.1 either, but 1.0.0.1 works fine. Maybe try that.
Could it be that 1.1.1.1 is blocked because it uses a secure protocol, as opposed to 1.0.0.1 which is unsecure? This would be for the sake of monitoring traffic.
If it's not the 5268AC, please let marty at cloudflare dot com know as well. According to a reply on NANOG, he is interested in knowing about other broken CPE.
>"And we wanted to put our money where our mouth was, so we committed to retaining KPMG, the well-respected auditing firm, to audit our code and practices annually and publish a public report confirming we're doing what we said we would."
It's worth pointing out that KPMG was Wells Fargo's independent auditor while the bank recently committed fraud on a massive scale by creating more than a million fake deposit accounts and 560,000 credit card applications for customers without their knowledge or approval.[1]
Calling KPMG a "well-respected auditing firm" when they failed to detect over a million fake bank accounts is a joke.
See:
KPMG was also implicated in the massive South African "state capture" scandal involving the (now fugitive) Gupta family and former president Jacob Zuma.
Among other things, KPMG issued a later-withdrawn report that was used to undermine the well-respected finance minister so that a more malleable person could be installed, while also auditing the Guptas during their worst excesses.
Lest we choose to dismiss this as crimes in an insignificant country, KPMG SA has been part of the worldwide group since the 70's, and South Africa's supposedly high auditing standards were a source of national pride.
The story seems to have gone dead after some senior leaders fell on their swords, but six months ago, there was serious talk about the firm being shut down in South Africa.
FT had a lot of coverage, if you're looking for a non-South African source (linking behind the paywall is probably not going to work, but you can Google for it).
I've worked with a KPMG subsidiary for a security audit. This is an E&Y kind of company, where you pay 4x to work with the least competent people because you need a familiar name stamped on some report.
KPMG has earned a few nickname acronyms because of this in Germany: "Keiner Prüft Mehr Genau" or "Kinder Prüfen Meine Gesellschaft" ("no one audits carefully anymore" and "children audit my company" respectively).
We have a few former KPMG employees. They have many stories to tell, about everything from glass ceilings to harassment.
All in all, KPMG is still well respected (so is E&Y and smaller firms).
We regularly receive government grants, and the best audit experiences I've had was with the small, EU-funded auditors. They have a high level of integrity and technical/financial knowledge. But that is a very specific niche.
Auditors are never "independent". They work for someone. If that someone is government or a client, great. If the auditor works for management, maybe OK for finding employee malfeasance, but no good for management malfeasance.
And of course, like tests, no audit can prove correctness, only can find flaws.
Speaking as a former KPMG employee who did infosec, the financial audit and controls people are far removed from anyone with technical skill in this domain. It may be cold comfort, but these kinds of special purpose attestations may as well be done by a different company (insert BearingPoint joke here).
We have had to supply information to KPMG “IT Auditors” at a client due to some software we wrote.
In most cases the auditors are young grads who have never worked in an actual IT/software dev team. So they have a very naive view and never ask the right questions. If one wanted to hide something, it would be super easy.
Audits provide reasonable assurance, not total. When auditors test access controls for a homegrown application for example, it is unreasonable to ask that a full code review is done to check 100% that checking the box next to Admin confers that, and that checking Read Only restricts it always. In my experiences performing these tests (as a young grad who had never worked on a software dev team), we would ask what the permissions were designed to provide and limit, and observe in the system that they did that. If a developer had programmed a backdoor that when you press A+B+3 and whisper into a microphone grants unlogged admin access, our test would miss that. But that's why we also test change controls and who has access to push to live, etc.
Edit - and to speak more to the topic at hand, there were plenty of people at the firm I worked with who absolutely had the technical expertise to perform such an in depth audit. They are simply engaged when higher levels of assurance are required. What level of scrutiny should your auditors provide your bathroom time monitoring system?
Definitely worth pointing out, but I don't take issue with their wording. KPMG has a worldwide presence and is an incredibly popular auditing firm. It's certainly possible for KPMG to be a "well-respected auditing firm" in the public's perception and for them to fail to detect all unethical practices during an audit.
While hiring them doesn't prove that Cloudflare's code and practices are sound, it does reduce the risk that they aren't.
As genuine as your question is, there are no good answers. The way we ended up with a Big Four is that the Fifth member of the Big Five (Arthur Andersen) audited Enron, essentially telling everybody that it wasn't an enormous fraud, but it was. All the senior people at AA avoided jail but the audit firm was so obviously untrustworthy it folded. But that doesn't mean the other Four are fine, it just means the "Too Big To Fail" problem is far worse for audit firms than for banking. If we took down one of the Big Four it would probably tank the whole world economy, and they know that, which is Not Good.
Many privacy activists believe that the best proof of a no-logging assertion is for a court to order a provider to turn over logs and for the company to be unable to do so.
Isn't the court system mostly powered by the threat of serious jail time if you're found to be lying, and penalties for your lawyers, too?
If you say "We don't have those logs," and you swear to it and a lawyer puts their name on the filing, it's not like Judge Alsup will start pentesting your company to find the one employee who accidentally has Dropbox pointed at an sftp mount of some production server.
Not that I really want to defend KPMG here, and this is obviously entirely anecdotal, but my team had our application code assessed by them (by request of the customer, so they could get some pointers on what kind of development they needed to focus on). I spent 2 days talking to them, answering questions, showing them data flows, database layouts, system diagrams, etc. They also required access to our source control (making the "let's remove this before the audit" idea pretty useless), issue tracker, etc.
The 2 people that I was in contact with were both competent and experienced. Definitely not "young grads who have never worked in an actual IT/software dev team" as someone claimed elsewhere.
> to audit our code and practices annually and publish a public report confirming we're doing what we said we would
Some exec to developer: Hey John, KPMG wrote to us that they will be here on friday to make an audit, lets just remove those 10 lines that <do whatever that you don't want to be shown in audit> until audit finishes.
I don't want to imply anything about Cloudflare here, just a comment about how useful that kind of private audits are generally.
That's just it, it's not verifiable. Proving something by allowing one audit doesn't change that. It's similar to when companies get certified to ISO 9001 or 27001; it doesn't prove much.
Publishing the full source code could help a little bit, but not much; one doesn't know what code is actually running.
> the bank recently committed fraud on a massive scale by creating more than a million fake deposit accounts and 560,000 credit card applications for customers without their knowledge or approval.
Suppose you were a Wells Fargo depositor and a Wells Fargo teller opened a fake account in your name without consulting you. What harm did you suffer?
How massive is this fraud if you measure it in a more useful way than "number of accounts"?
>"Suppose you were a Wells Fargo depositor and a Wells Fargo teller opened a fake account in your name without consulting you. What harm did you suffer?"
Are you joking? The fake accounts were set up in order to bilk customers out of money in the form of overdrafts fees and penalties.
"Some customers noticed the deception when they were charged unexpected fees, received credit or debit cards in the mail that they did not request, or started hearing from debt collectors about accounts they did not recognize. But most of the sham accounts went unnoticed, as employees would routinely close them shortly after opening them. Wells has agreed to refund about $2.6 million in fees that may have been inappropriately charged."[1]
It is also probably impossible to quantify the time customers lost having to deal with this. But I think it is safe to say it was significant.
>"How massive is this fraud if you measure it in a more useful way than "number of accounts"
OK, let's use dollar amounts as a metric - $2.6 million in fees levied against your own customers? And considering Wells Fargo found an additional 1.4 million previously undisclosed fake accounts as recently as August[2], and that the regulatory probe has now widened beyond their retail banking unit and now includes their private wealth division, I would say pretty fucking massive.
It's really interesting that you seek to trivialize the scope and severity of a story you seem to know so very little about.
I do know about this story. The purpose of the fake accounts was to meet sales quotas. Fees earned for the bank were accidental and usually nonexistent, for the obvious reason that if you charge your unwitting customer money, they are much more likely to realize they have an account with you.
>"Fees earned for the bank were accidental and usually nonexistent,"
"Approximately 85,000 of the accounts opened incurred fees, totaling $2 million. Customers' credit scores were also likely hurt by the fake accounts.[43] The bank was able to prevent customers from pursuing legal action as the opening of an account mandated customers enter into private arbitration with the bank."
"The bank paid $110 million to consumers who had accounts opened in their names without permission in March 2017." The money repaid fraudulent fees and paid damages to those affected."[1]
That's 85,000 of what you call "non-existent" fees totaling 2 million dollars. And whether or not those were secondary effects of the fraud is completely immaterial.
It's a rather bizarre position to want to defend a bank that not only defrauded its customers but has also admitted to doing so. But you are entitled to that. What you aren't entitled to however is your own alternative facts.
I'm pretty confident that when 85,000 out of "more than a million" accounts earn fees, it's fair to say that fees are "usually nonexistent". You're talking about accounts that Wells Fargo didn't want and fees that it assessed by mistake. By a normal analysis, that wouldn't be a scandal of any kind, and it would call for no more than returning the accidental fees, without a 55x punitive damages award.
> "The bank was able to prevent customers from pursuing legal action as the opening of an account mandated customers enter into private arbitration with the bank."
That's really not going to work if the customer didn't intend to open the account. The fact that (by your numbers) average damages among those who were damaged at all were up to $23.50 may have had more to do with lack of legal action by customers.
The arbitration clause is an overarching thing. The customer agrees to it when they legitimately open an account. It covers the entire banking relationship between that customer and the bank. Which is why Wells was able to use it to prevent litigation from their existing customer over the fraudulent accounts.
>It's worth pointing out that KPMG was Wells Fargo's independent auditor while the bank recently committed fraud on a massive scale by creating more than a million fake deposit accounts and 560,000 credit card applications for customers without their knowledge or approval.[1]
Why is it worth pointing out? Please detail the work you've done in establishing that KPMG had access to the data and willfully ignored it.
That's like saying Linux is a useless project because of giant security holes that stay hidden for decades. I prefer to live in the real world, which is a lot more nuanced, and my question still stands.
>because the Linux project isn't dedicated to auditing the Linux project.
Huh? Code Review? Testing? The entire point of open source especially w.r.t security is to have millions of eyes on the source. Heck with the entire world being able to audit and review the source code, people still find bugs that were introduced decades ago.
>It's like calling a home security system pointless if it doesn't detect any forced entries.
I'm afraid that didn't make much sense to me.
Anyway, why are we focusing on irrelevant minutiae of language? I simply asked a commenter to show the work they've done for basing their opinion.
>Heck with the entire world being able to audit and review the source code
That's irrelevant when we are talking about a company being paid specifically to audit something. The entire world is able to send me food as well, but I don't get mad when it doesn't except for when I pay someone to do it.
>I simply asked a commenter to show the work they've done
And it was a dumb question. An auditing company that failed to detect massive fraud either willfully ignored it to sellout or was too incompetent to recognize it.
>That's irrelevant when we are talking about a company being paid specifically to audit something. The entire world is able to send me food as well, but I don't get mad when it doesn't except for when I pay someone to do it.
Linux is developed almost exclusively by people who get paid for their work. Billions of dollars of real money has been poured by IBM, Intel, RH, etc. You are thoroughly confused my friend. Lets stick with the original point.
> An auditing company that failed to detect massive fraud either willfully ignored it to sellout or was too incompetent to recognize it.
So explain how they audited the firm, which data they had access to, and how they were incompetent.
You can't define your way out of providing evidence. "An auditor does X. They couldn't do X, therefore they were incompetent." That schoolyard logic doesn't work. People will ask you to back up your opinion. It's completely fine to say "I don't know"...
>Linux is developed almost exclusively by people who get paid for their work. Billions of dollars of real money have been poured in by IBM, Intel, RH, etc. You are thoroughly confused, my friend. Let's stick with the original point.
What aren't you getting? Developing is not auditing. KPMG wasn't paid to do banking; they were just paid to audit.
>So explain how they audited the firm, which data they had access to, and how they were incompetent.
As an auditing firm you either demand enough data to do a real audit or you walk away from the deal. So either they didn't have enough data or they were sell-outs rubber stamping it. That's just how auditing works.
>People will ask you to back up your opinion. It's completely fine to say "I don't know"...
It's not an opinion. It's literally what they are paid to do. If I pay for a hamburger and someone just gives me a pile of sand, any bystander can tell that the seller didn't do their job.
Sorry, I don't want to waste my time any further. It's obvious to me that you have zero actual evidence, and no knowledge of what was done or what was overlooked. Goodbye.
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=53 time=188.730 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=53 time=178.453 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=53 time=179.869 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=53 time=177.808 ms
Google:
PING 8.8.8.8 (8.8.8.8): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 8.8.8.8: icmp_seq=1 ttl=42 time=58.368 ms
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
64 bytes from 8.8.8.8: icmp_seq=5 ttl=42 time=51.636 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=42 time=55.772 ms
Request timeout for icmp_seq 7
64 bytes from 8.8.8.8: icmp_seq=8 ttl=42 time=42.365 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=42 time=45.782 ms
APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.
We talked to the APNIC team about how we wanted to create a privacy-first, extremely fast DNS system. They thought it was a laudable goal. We offered Cloudflare's network to receive and study the garbage traffic in exchange for being able to offer a DNS resolver on the memorable IPs. And, with that, 1.1.1.1 was born.
1.1.1.1 is a partnership between Cloudflare and APNIC.
Cloudflare runs one of the world’s largest, fastest networks. APNIC is a non-profit organization managing IP address allocation for the Asia Pacific and Oceania regions.
Cloudflare had the network. APNIC had the IP address (1.1.1.1). Both of us were motivated by a mission to help build a better Internet. You can read more about each organization’s motivations on our respective posts: Cloudflare Blog / APNIC Blog.
Not relevant, but I just realized that I've been ignoring this post for hours and I think that it's because the "1.1.1.1" threw me off. I wonder if anyone else has experienced this?
This looks good, but I assume that any DNS request I make is still routed through my ISP. Therefore, I assume there is no way to stop my ISP from keeping a log of every URL I visit. Is that correct?
Your ISP will be aware of all traffic to and from your IP, but consider that most people have their DNS set to their ISP's resolvers, meaning the ISP easily sees this information in its logs. Some people use Google DNS or another provider to bypass the ISP's DNS, which is a step better.
Now Cloudflare is providing a very fast and privacy-driven DNS, so to me this is a step up from the others (Quad9 and OpenDNS being formidable alternatives).
Say you're on public WiFi and don't want plain DNS queries leaving your machine: there's also DNS-over-HTTPS (which Cloudflare and a couple of others support), which doesn't send queries over the plain DNS protocol and would instead make a POST request to, say, https://1.1.1.1/.well-known/dns-query.
Also with HTTPS, ISPs won't see the full URL, just that a secure connection was made to that domain.
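If you want to see what a DoH lookup looks like in practice, here's a minimal sketch using Cloudflare's JSON endpoint (the cloudflare-dns.com/dns-query URL and the application/dns-json accept header are as I recall from their docs; there's also an RFC-style wire-format POST):

$ curl -s -H 'accept: application/dns-json' \
    'https://cloudflare-dns.com/dns-query?name=example.com&type=A'

The answer comes back as JSON over an ordinary HTTPS connection, so on the wire it looks like any other TLS traffic to that host.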
From someone who takes DNS for granted every day: can someone shed some light on why the current state of DNS has been called archaic and in need of replacement with something better?
It's all plain-text over UDP. This is easily exploited for various purposes: spoofing (DDoS attacks), surveillance (such as by ISPs), hijacking/tampering, censorship, privacy concerns, and so on.
As everything else relies on DNS, the DNS must also be secure.
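You can see the problem for yourself with two standard tools (the interface and resolver choice here are arbitrary):

$ sudo tcpdump -ni any udp port 53 &    # watch DNS traffic leaving the machine
$ dig @1.1.1.1 example.com A +short     # an ordinary, unencrypted query

tcpdump prints the queried name in cleartext ("A? example.com."), which is exactly what any on-path ISP or coffee-shop router sees.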
Are there replacement options being worked on? What about wrapping each request and unwrapping it on the other end, something like how Tor wraps requests in many layers?
How was Cloudflare able to get a wildcard certificate with IP address SANs added to it? And how do I obtain one from DigiCert? I don't see the option on their site.
I understand that, and I've had to use the IPv6 address since Comcast is null routing 1.1.1.1 in my area, but that doesn't explain how a wildcard certificate was issued with IP addresses in the SAN.
Am I able to buy one for my own website? If so, how? If not, why not? I couldn't even get past the DigiCert cert selection page since a wildcard cert can't have SANs, and a SAN cert can't contain a wildcard. The only thing I haven't tried yet is supplying my own CSR.
Change DNS in Win10: Control Panel - Network and Internet - Network Connections - <adapter> (right click) - Properties - Internet Protocol Version 4 (double click)
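For anyone who prefers the command line, a rough equivalent (run from an elevated prompt; "Ethernet" is just a placeholder for your actual adapter name):

netsh interface ip set dns name="Ethernet" static 1.1.1.1
netsh interface ip add dns name="Ethernet" 1.0.0.1 index=2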
Verizon (my ISP) is still the fastest, but I switched to 1.1.1.1 for the perceived privacy benefit. The speed difference between Verizon, Google, and Cloudflare wouldn't be noticeable for me anyway.
I'm fine with nitpicking. Let me try and be clear: We're not logging IPs. We inherently receive them when clients connect to the service, but we don't write them to disk, and we flush them quickly (i.e., within seconds or minutes). We're not logging hashes of IPs. We're not logging ASNs of the IPs connecting to the service. We do log the other parts of a DNS query in order to help prevent abuse and debug issues. However, we've committed to wiping these logs within 24 hours. We have no interest in doing anything to deanonymize users. We have a great business based in large part on making the Internet more private and secure. Logically: we would never sacrifice that great business to get into a crappy data sharing service.
One edit: team corrected me that we do log ASNs in some cases in order to debug issues with networks that may have trouble connecting or have been blocked.
No. I mean most businesses that are based on sharing data. They are low margin and not very interesting. I was thinking about businesses like Axicom when I wrote the comment.
Have a ton of respect for David Ulevitch and the whole OpenDNS team. While OpenDNS started with an ad-supported business model, they've completely pivoted away from that. Now that they're part of Cisco, I believe their nearly exclusive revenue stream today is their Umbrella product which is a network security product aimed at businesses. While I don't know for sure, I'd be highly surprised if OpenDNS were selling browsing data.
"I can't be arsed to pay $3/mo for a VPS that I can tunnel my DNS requests through, so I'm gonna nitpick on hackernews about a company trying their best to offer it to /everyone/ for free"
So, you're implying things here that I'll address with an H. L. Mencken quote,
>"The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all."
Universal free speech is not laudable, it's suicidal. If your free speech doesn't protect you from those who want to take it away, they will win, on a long enough time horizon. They only need to win once.
Wow. I can't tell if you're trying to be funny by being meta or you just don't realize what you just said applies to your very argument. Let's break it down.
You want to protect free speech by taking it away because if you don't then someone might use free speech to take away free speech.
First, speech is not an action that can violate your rights. Sticks and stones, etc. And no, just because communication can help organize your political opposition does not mean the speech itself is violating your rights. Actions and legislation do that.
Second, deciding that some things are allowed and some aren't, and then enforcing those arbitrary decisions through violence by the state, certainly can violate those rights. And it gets easier and more frequent every time.
I suppose you think that limited free speech is a thing that can persist. I strongly disagree. The idea of universal free speech exists because any attempt to regulate it leads to the loss of all of it fairly quickly, if not instantly; they only need to win once. It exists to protect opinions that are disliked by most, if not all.
I see your argument is basically that if free speech allows for speech that supports the idea of not allowing free speech, then it will fail. And that may be true. That's why constant vigilance is required even, and especially, when they try to use people whose opinions almost everyone hates to justify it. There is no final solution.
It's a private organization with no monopoly and lots of competition. Free speech doesn't apply here.
Also, Cloudflare gets vastly more negative opinions saying they don't police content enough and serve too many unsavory sites, so it seems there's no way to win with the HN crowd.
It set the precedent that they do filtering. It is now being used in legal cases against Cloudflare by companies suing them to force them to filter other things.
Any censorship immediately leads to massive censorship even if they don't want to expand it. That's why it has to be stopped at the start; not done at all. Dumb pipe or censorship pipe.
No business is completely a dumb pipe; the DMCA provisions are very specific and are increasingly overruled once enough (copyrighted) content is in place.
Cloudflare also specifically removed that site for a stated reason: the site claimed CF was helping them. That is outside the bounds of the site's content itself and is a perfectly fine reason to stop doing business, based on libel and misrepresentation.
Well, the post sort of implies that they log everything for 24 hours, but instead of raw IP addresses they log hashed ones, as they still need to identify everyone. Which, sadly, doesn't affect tracking practices at all.
Unfortunately, nitpicking is quite necessary. Haven't we seen enough instances of corporations lying through omission? Where is the trend that indicates we should give a more favorable, trustworthy reading to terms and promises like these? I don't see it.
Cloudflare is a for-profit corporation--you know, "duty to shareholders" and all that. We must assume, almost by definition, that they actually have their own self-interests at heart.
> the only data is domain name, record and the incoming ip
Other data that can be logged:
- timestamp - this can be very revealing when correlated with other datasets.
- ASN - can sometimes act like a fingerprint on its own, and assists in correlating other data (e.g. the timestamp)
- any identifiable variation in the structure or behavior between different DNS resolver implementations. See nmap's "-O" option that detects the OS from the TCP/IP protocol implementation.
Fair point, and (maybe) you are right; I am nitpicking, but I'm not ashamed to do so. It would have been stronger to say "We won't store your data" rather than "We won't sell your data". And frankly, "we will never log your IP address (the way other companies identify you)", like, really? Talking very naively, what if they just store a hash or some other derivative of the IP instead; does that count as logging the IP? And what about the timestamp, geoIP, reverse hostname and other factors; can deep intelligence be used to associate them with other behavior?
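To make the hashing point concrete, a toy sketch (the address is from a documentation range and TARGET_HASH stands in for whatever digest a provider might have logged): IPv4 has only 2^32 addresses, so an unsalted hash of an IP can be reversed by brute force. The loop below only searches one /24 for brevity, but the full space is still cheap on a single machine.

$ echo -n "198.51.100.23" | sha256sum   # what a "hashed IP" log entry would hold
$ for i in $(seq 0 255); do echo -n "198.51.100.$i" | sha256sum | grep -q "$TARGET_HASH" && echo "found: 198.51.100.$i"; done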
It never was perfectly valid. That blog post is incorrect, and network engineers are perfectly fine arguing against that practice. The IP address 1.1.1.1 was reserved by APNIC and now belongs to the APNIC and Cloudflare research project.
Assigning an IP address you don't own on a local network usually means that you cut off access to the actual owner of that address. You might not (immediately) notice it because you don't need to access anything that's located there. But it will set you up for unpleasant surprises in the future when your users (or yourself) want to access a resource that happens to be located there.
RFC 1918 <https://tools.ietf.org/html/rfc1918> provides explicit IP ranges you should use for private resources (10.x.x.x, 172.16.x.x-172.31.x.x, 192.168.x.x), which are not routed over the Internet and where your organization is responsible for avoiding IP address conflicts.
As that article mentions, it wasn't "perfectly valid" even back then; it just didn't hurt. If I understand the specific implementation mentioned there correctly, it'll still work if the interception is done right (only catching DHCP and redirecting it to where it should go, leaving everything else untouched).
From the article: The only question that remained was when to launch the new service? This is the first consumer product Cloudflare has ever launched, so we wanted to reach a wider audience. At the same time, we're geeks at heart. 1.1.1.1 has 4 1s. So it seemed clear that 4/1 (April 1st) was the date we needed to launch it.
Apparently, you need to provide it as a Subject Alternative Name (SAN).
This is the entry for the cert used:
DNS Name=*.cloudflare-dns.com
IP Address=1.1.1.1
IP Address=1.0.0.1
DNS Name=cloudflare-dns.com
IP Address=2606:4700:4700:0000:0000:0000:0000:1111
IP Address=2606:4700:4700:0000:0000:0000:0000:1001
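If you want to pull those entries yourself, something like this works (standard openssl subcommands; exact output formatting varies by version):

$ echo | openssl s_client -connect 1.1.1.1:443 -servername cloudflare-dns.com 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'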
SAN is the only correct way to write any kind of name for servers on the Internet in a certificate. The "Common Name" was left as a compatibility feature like 20 years ago when SANs were invented and then it rusted into place, but is no longer examined by current Firefox or Chrome browsers for "real" certificates from the public Internet. Chrome shipped releases for a while with a bug where they'd complain the server's cert had the wrong "Common Name" when actually they never checked CN at all, and so it might even have the right Common Name, but they really meant "Your SANs don't match fool" and hadn't updated the error text.
Because crappy software (looking at you here, OpenSSL) makes writing SANs into a Certificate Signing Request way harder than it needs to be, a lot of CAs (including Let's Encrypt) will take a CSR that says "My Common Name is foo.example" and sigh, and issue a cert which adds SAN dnsName foo.example, because they know that's what you want. Really, somebody should fix the software one of these days.
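For what it's worth, newer OpenSSL (1.1.1+) does make it a one-liner with -addext; a sketch with made-up names and an example IP:

$ openssl req -new -newkey rsa:2048 -nodes -keyout foo.key -out foo.csr \
    -subj "/CN=foo.example" \
    -addext "subjectAltName=DNS:foo.example,IP:203.0.113.10"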
In older Windows versions, SChannel (Microsoft's implementation of SSL/TLS) doesn't understand ipAddress, and thinks the correct way to match an ipAddress against a certificate is to turn the address into ASCII text of dotted decimals and compare that to the dnsName entries. This, unsurprisingly, is not standards compliant.
It's good to see a CA not trying to fudge this, but the consequence is probably that if you have older Windows (XP? Maybe even something newer) these certs don't check out as valid for the site. Eh. Upgrade already.
>"4.2.1.6. Subject Alternative Name
The subject alternative name extension allows identities to be bound to the subject of the certificate. These identities may be included in addition to or in place of the identity in the subject field of the certificate. Defined options include an Internet electronic mail address, a DNS name, an IP address, and a Uniform Resource Identifier (URI). Other options exist, including completely local definitions."[1]
Talking about the distribution of traffic over Tor is very different from talking about the people who use it. Cloudflare built Privacy Pass with the specific intent of allowing people who use Tor to have a better experience: https://blog.cloudflare.com/cloudflare-supports-privacy-pass...
Artificially limiting the choice of browsers is not really something that should be honored. But I thank you for this insight - I did not know this link.
Nope, certificates can be, and sometimes are, issued for plain IP addresses, yes including in the Web PKI ("proper" certificates that work in common web browsers).
Because the BRs say that the subject Common Name, if present (which it usually will be for really crappy software that still doesn't implement standards from _last god-damn century_) must be chosen from the list of SANs, these certificates will have an IP address as their CN, plus an ipAddress SAN.
Here is an example, which my records say had an IP address as its only name, but at time of writing crt.sh is timing out for me so forgive me if this some completely unrelated cert and I've pasted the wrong one:
I'm more curious from a practical perspective: the FAQ appears to claim DNSCrypt gives you most/all of what the others do, with easy setup.
The caveat that a "good amount of servers support the protocol" isn't very clear; how many is a "good amount"? Does that hold true now? Unsupported servers appear to fall back to traditional DNS resolution, per the diagram; is this not the case with the HTTP/TLS implementations?
EDIT: Looks like this might be an issue w/ my AT&T-provided CPE, sorry! (more details at the bottom)
From my vantage point, 1.1.1.1 is inaccessible, while 1.0.0.1 seems to work just fine.
Comments on the blog post blame this on "various reasons" but, at least in my case, this seems to be a Cloudflare issue:
$ ping -c 5 -q 1.0.0.1
PING 1.0.0.1 (1.0.0.1) 56(84) bytes of data.
--- 1.0.0.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 34.955/35.737/37.492/0.936 ms
$ ping -c 5 -q 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4102ms
$ traceroute 1.0.0.1
traceroute to 1.0.0.1 (1.0.0.1), 30 hops max, 60 byte packets
[...]
3 * * *
4 12.83.79.61 (12.83.79.61) 28.126 ms 28.663 ms 29.110 ms
5 cgcil403igs.ip.att.net (12.122.132.121) 35.854 ms 37.532 ms 37.510 ms
6 ae16.cr7-chi1.ip4.gtt.net (173.241.128.29) 33.997 ms 29.083 ms 29.647 ms
7 xe-0-0-0.cr1-det1.ip4.gtt.net (89.149.128.74) 37.758 ms 35.165 ms 36.620 ms
8 cloudflare-gw.cr0-det1.ip4.gtt.net (69.174.23.26) 36.946 ms 37.343 ms 38.574 ms
9 1dot1dot1dot1.cloudflare-dns.com (1.0.0.1) 38.385 ms 36.621 ms 37.157 ms
$ traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets
[...]
3 * * *
4 12.83.79.61 (12.83.79.61) 30.388 ms 12.83.79.41 (12.83.79.41) 30.601 ms 31.280 ms
5 cgcil403igs.ip.att.net (12.122.132.121) 37.602 ms 37.873 ms 37.808 ms
6 ae16.cr7-chi1.ip4.gtt.net (173.241.128.29) 33.441 ms 29.788 ms 29.678 ms
7 xe-0-0-0.cr1-det1.ip4.gtt.net (89.149.128.74) 35.266 ms 35.124 ms 33.921 ms
8 cloudflare-gw.cr0-det1.ip4.gtt.net (69.174.23.26) 35.294 ms 35.949 ms 35.455 ms
9 * * *
10 * * *
11 * * *
12 *^C
----
EDIT: I have AT&T-provided CPE that I have to use due to 802.1X. If I log into the device (over HTTP) and use the built-in (web-based) diagnostics tools, I am able to successfully ping 1.1.1.1 from the device itself:
ping successful: icmp seq:0, time=2.364 ms
ping successful: icmp seq:1, time=1.085 ms
ping successful: icmp seq:2, time=1.160 ms
ping successful: icmp seq:3, time=1.245 ms
ping successful: icmp seq:4, time=0.739 ms
These RTTs are way too low, however. The RTT for a ping to the CPE's next-hop/default gateway comes in at, minimum, ~20 ms.
When pinging 1.1.1.1 from my (pfSense-based) router sitting directly behind the modem, however, no replies come back from the modem to the router (confirmed via pcap on the upstream-facing interface).
Thus, it looks like this is an issue with the AT&T CPE (5268AC).
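For anyone who wants to reproduce that capture, roughly (igb0 is just an example WAN interface name on a pfSense/FreeBSD box):

$ ping -c 3 1.1.1.1 &
$ tcpdump -ni igb0 'icmp and host 1.1.1.1'

If the echo requests go out but no echo replies come back, the drop is upstream of the router.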
I have ATT and am seeing the same issues, but my tracert is different.
tracert 1.1.1.1
Tracing route to 1dot1dot1dot1.cloudflare-dns.com [1.1.1.1]
over a maximum of 30 hops:
1 1 ms 1 ms 1 ms 1dot1dot1dot1.cloudflare-dns.com [1.1.1.1]
tracert 1.0.0.1
Tracing route to 1dot1dot1dot1.cloudflare-dns.com [1.0.0.1]
over a maximum of 30 hops:
1 3 ms <1 ms <1 ms 192.168.1.254
2 48 ms 18 ms 34 ms 99-153-196-1.lightspeed.stlsmo.sbcglobal.net [99.153.196.1]
3 19 ms 17 ms 17 ms 64.148.120.125
4 29 ms 24 ms 18 ms 71.144.225.112
5 19 ms 18 ms 18 ms 71.144.224.85
6 19 ms 18 ms 19 ms 12.83.40.161
7 26 ms 27 ms 26 ms cgcil403igs.ip.att.net [12.122.132.121]
8 27 ms 24 ms 28 ms ae16.cr7-chi1.ip4.gtt.net [173.241.128.29]
9 32 ms 31 ms 31 ms xe-0-0-0.cr1-det1.ip4.gtt.net [89.149.128.74]
10 31 ms 31 ms 31 ms cloudflare-gw.cr0-det1.ip4.gtt.net [69.174.23.26]
11 31 ms 31 ms 35 ms 1dot1dot1dot1.cloudflare-dns.com [1.0.0.1]
In a browser, 1.1.1.1 comes back as connection refused. 1.0.0.1 loads.
> When pinging 1.1.1.1 from my (pfSense-based) router sitting directly behind the modem, however, no replies come back from the modem to the router (confirmed via pcap on the upstream-facing interface).
Your upstream diagnosis seems to suggest otherwise, but perhaps you have an issue with pfBlockerNG? If you're using pfSense with pfBlockerNG + DNSBL IP rules, it populates empty firewall alias files with 1.1.1.1, which was falsely assumed to be unused.
Review your aliases and pfBlockerNG alerts. If you see it dropped there, disable the firewall rule option on DNSBL, see screenshot [0]
Additional brief discussion on reddit [1] with comments from the pfBlockerNG author.
> Through the project we protect groups like LGBTQ organizations targeted in the Middle East, journalists covering political corruption in Africa, human rights workers in Asia, and bloggers on the ground covering the conflict in Crimea.
And in the West? Do they protect MRAs and Christians?
I love how their view of political targeting is limited to what the West wants to impose on all countries. Yet the organization “A Voice For Men” was flagged as hate speech for funding the movie The Red Pill (2016), the most censored movie of 2017 in the West. If they haven't identified them as victims of political oppression, they don't know much about free speech.
The idea is solid on the surface, but I don't trust its parent. Setting up hundreds of millions of internet machines to be reliant on a single corporation's service offerings is asking for disaster, and Cloudflare has a sleazy history.
But hey, they say their product is legitimate, so it must be true.
I'm pretty sure that CloudFlare has people working with US intelligence who supply information that can't be used in court, requiring parallel construction.
I am not allowed to share that information. I now work for an infosec/intel company. I've worked on IBM/Watson's systems, and before that I worked at another intel agency. I have, regrettably, worked with Packet Forensics, the FBI, the Secret Service, and yes... Cloudflare.
He asked for evidence, not more unverifiable claims.
I'm not a huge fan of Cloudflare and do not use any of their services but you can't just go around making shit up and then refuse to back up your claims.
This is addressed with commitments and third party audits assuring 24h retention for IP logs. There are other conversations in this thread surrounding the viability of those audits, but that's somewhat of a tangential debate.
This doesn't answer whether or not cloudflare will be able to protect against someone intercepting their traffic and recording dns lookups independently, but that's a problem for any dns provider.
Especially since they advertise being actively involved in certain political movements and promise to defend those ones. That's a scarily biased attitude for a DNS. How does someone's sexuality have anything to do with IP address lookups?
This is bad, bad, bad advice. You don't set the DNS on your local machine. That breaks things. The DNS needs to be set at the gateway. If you change your PC/mac's DNS to an external service, you won't be able to resolve any addresses on the local network.
Come on, CloudFlare. You guys know better than that. Please stop breaking the (local) internet.
Ordinary users don't have anything that resolves to local IPs, so this is a non-issue for just about anybody. Plus, many if not most ISP-provided modem-router-AP-boxes don't let you configure the DNS server they use, making your recommendation impossible to follow for most users. Someone who runs services on their local network likely knows enough to do as you say, but for 99% of people, these instructions are exactly what they need.
This is bad. Running your own local DNS server is part of good parenting. So breaking local services is very bad for us responsible parents, to say the least. I block all outbound DNS lookups except to my ISP. Sometimes I redirect lookups destined for other resolvers (e.g. 8.8.8.8) to my local DNS server. I don't care if some app breaks because of this; often it's because of bad programming. So, don't break local DNS!
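For anyone curious what that redirect looks like, a rough sketch on a Linux router (eth1 as the LAN interface and 192.168.1.1 as the local resolver are just illustrative names):

# rewrite any stray client DNS query so it lands on the local resolver instead
iptables -t nat -A PREROUTING -i eth1 -p udp --dport 53 ! -d 192.168.1.1 -j DNAT --to-destination 192.168.1.1
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 53 ! -d 192.168.1.1 -j DNAT --to-destination 192.168.1.1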
Yes! People are smart enough to handle most things. But they don't have time or attention to handle all the things. When we're making technology for users, we should do our best to make sure they only have to learn about the things that are important to them.
This is useful for use cases for which that doesn't matter. Using your computer or devices at home, on your own wifi, where there is no need to resolve local addresses. Or on public wifi, such as in a café, where there is no need to resolve local addresses, and you don't control the gateway.
You're not typical of the average consumer, though. Don't forget that HN is a particularly technical crowd, so you can't use it to judge how technically competent Internet users are.
Perhaps you missed the sections near the top titled "DNS's Privacy Problem" and "DNS's Censorship Problem" which explain why not everybody can trust their network operator?
DNS needs to be moved to a blockchain system yesterday.
After currency, it's close to being the second killer app for blockchain.
Anything else, as in anything centralized, will be vulnerable to random state-actor censorship, be they China, the Google, the USG, Turkey, or any other deplorables, and is therefore broken.
Namecoin was an early attempt at that (almost as old as bitcoin), but it came in too early.