Hacker News
1.1.1.1: Fast, privacy-first consumer DNS service (cloudflare.com)
1895 points by satysin 5 months ago | 657 comments



And look at these ping times:

                                   CloudFlare       Google DNS       Quad9            OpenDNS          
  NewYork                            2 msec           1 msec           2 msec           19 msec          
  Toronto                            2 msec           28 msec          17 msec          27 msec          
  Atlanta                            1 msec           2 msec           1 msec           19 msec          
  Dallas                             1 msec           9 msec           1 msec           7 msec           
  San Francisco                      3 msec           21 msec          15 msec          20 msec          
  London                             1 msec           12 msec          1 msec           14 msec          
  Amsterdam                          2 msec           6 msec           1 msec           6 msec           
  Frankfurt                          1 msec           9 msec           2 msec           9 msec           
  Tokyo                              2 msec           2 msec           81 msec          77 msec          
  Singapore                          2 msec           2 msec           1 msec           189 msec         
  Sydney                             1 msec           130 msec         1 msec           165 msec

Very impressive, Cloudflare.


Where are you testing from? I'm going to guess: a datacenter. Residential customers won't see anything this fast. I'm in a small town in Kansas, connected by 1 Gbit AT&T fiber. I'm getting ~26ms to 1.1.1.1 and ~19ms to my private DNS resolver that I host in a datacenter in Dallas. Google DNS comes in around 19ms.

I suspect that Cloudflare and Google DNS both have POPs in Dallas, which accounts for the similar numbers to my private resolver. My point is: low latency for datacenter-located clients is great, but the advantage shrinks when consumer internet users have to cross their ISP's long private fiber hauls to reach a POP. Once you're at the exchange point, it doesn't really matter which provider you choose. Go with the one with the least censorship, best security, and most privacy. For me, that's the one I run myself.

Side note: I wish AT&T was better about peering outside of their major transit POPs and better about building smaller POPs in regional hubs. For me, that would be Kansas City. Tons of big ISPs and content providers peer in KC but AT&T skips them all and appears to backhaul all Kansas traffic to DFW before doing any peering.


Ping from University of Rochester, over wifi:

Cloudflare:

  64 bytes from 1.1.1.1: icmp_seq=0 ttl=128 time=2 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=128 time=2 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=128 time=2 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=128 time=9 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=128 time=2 ms
Google:

  64 bytes from 8.8.8.8: icmp_seq=0 ttl=54 time=12 ms
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=11 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=13 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=45 ms
  64 bytes from 8.8.8.8: icmp_seq=4 ttl=54 time=14 ms
  64 bytes from 8.8.8.8: icmp_seq=5 ttl=54 time=11 ms
  64 bytes from 8.8.8.8: icmp_seq=6 ttl=54 time=34 ms
Quad9:

  64 bytes from 9.9.9.9: icmp_seq=0 ttl=53 time=10 ms
  64 bytes from 9.9.9.9: icmp_seq=1 ttl=53 time=69 ms
  64 bytes from 9.9.9.9: icmp_seq=2 ttl=53 time=14 ms
  64 bytes from 9.9.9.9: icmp_seq=3 ttl=53 time=58 ms
  64 bytes from 9.9.9.9: icmp_seq=4 ttl=53 time=52 ms
One thing I noticed is that when I first pinged 1.1.1.1 I got 14ms, which then quickly dropped to ~3ms consistently:

  64 bytes from 1.1.1.1: icmp_seq=0 ttl=128 time=14 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=128 time=14 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=128 time=2 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=128 time=3 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=128 time=1 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=128 time=4 ms


Beijing:

  PING 1.1.1.1 (1.1.1.1): 56 data bytes
  64 bytes from 1.1.1.1: icmp_seq=0 ttl=52 time=241.529 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=52 time=318.034 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=52 time=337.291 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=52 time=255.748 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=52 time=247.765 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=52 time=235.611 ms
  64 bytes from 1.1.1.1: icmp_seq=6 ttl=52 time=239.427 ms
  64 bytes from 1.1.1.1: icmp_seq=7 ttl=52 time=247.911 ms
  64 bytes from 1.1.1.1: icmp_seq=8 ttl=52 time=260.911 ms
  64 bytes from 1.1.1.1: icmp_seq=9 ttl=52 time=281.153 ms
  64 bytes from 1.1.1.1: icmp_seq=10 ttl=52 time=300.363 ms
  64 bytes from 1.1.1.1: icmp_seq=11 ttl=52 time=234.296 ms


Hangzhou:

    $ ping 1.1.1.1
    PING 1.1.1.1 (1.1.1.1): 56 data bytes
    Request timeout for icmp_seq 0
    Request timeout for icmp_seq 1
    Request timeout for icmp_seq 2
    Request timeout for icmp_seq 3
    Request timeout for icmp_seq 4
    Request timeout for icmp_seq 5
    Request timeout for icmp_seq 6
    Request timeout for icmp_seq 7
    Request timeout for icmp_seq 8
    Request timeout for icmp_seq 9
    Request timeout for icmp_seq 10

    $ ping 1.0.0.1
    PING 1.0.0.1 (1.0.0.1): 56 data bytes
    64 bytes from 1.0.0.1: icmp_seq=0 ttl=50 time=167.359 ms
    64 bytes from 1.0.0.1: icmp_seq=1 ttl=50 time=165.791 ms
    64 bytes from 1.0.0.1: icmp_seq=2 ttl=50 time=165.846 ms
    64 bytes from 1.0.0.1: icmp_seq=3 ttl=50 time=166.755 ms
    64 bytes from 1.0.0.1: icmp_seq=4 ttl=50 time=166.694 ms
    64 bytes from 1.0.0.1: icmp_seq=5 ttl=50 time=166.088 ms
    64 bytes from 1.0.0.1: icmp_seq=6 ttl=50 time=166.460 ms
    64 bytes from 1.0.0.1: icmp_seq=7 ttl=50 time=166.668 ms
    64 bytes from 1.0.0.1: icmp_seq=8 ttl=50 time=166.753 ms
    64 bytes from 1.0.0.1: icmp_seq=9 ttl=50 time=165.670 ms
    64 bytes from 1.0.0.1: icmp_seq=10 ttl=50 time=166.816 ms
Seems it's not China-friendly :-(


Australia :(

  64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=17.580 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=18.025 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=17.780 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=18.231 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=17.906 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=18.447 ms


Cambodia - crappy office wifi

  PING 1.1.1.1 (1.1.1.1): 56 data bytes
  64 bytes from 1.1.1.1: icmp_seq=0 ttl=59 time=22.806 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=23.321 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=24.379 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=25.869 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=24.485 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=24.165 ms

  PING 8.8.8.8 (8.8.8.8): 56 data bytes
  64 bytes from 8.8.8.8: icmp_seq=0 ttl=57 time=23.005 ms
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=22.867 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=24.461 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=23.680 ms
  64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=35.581 ms
  64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=21.033 ms
  64 bytes from 8.8.8.8: icmp_seq=6 ttl=57 time=41.634 ms


Johannesburg, South Africa. 100 Mb/s home fibre:

  ping 1.1.1.1
  PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=1.36 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=1.32 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=1.34 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=1.38 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=1.37 ms

  ping 8.8.8.8
  PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=1.33 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=1.38 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=1.35 ms
  64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=1.36 ms
  64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=1.35 ms


Melbourne, Australia :)

   PING 1.1.1.1 (1.1.1.1): 56 data bytes
   64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=5.044 ms
   64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=6.447 ms
   64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=6.371 ms
   64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=6.308 ms
   64 bytes from 1.1.1.1: icmp_seq=4 ttl=60 time=7.317 ms
   64 bytes from 1.1.1.1: icmp_seq=5 ttl=60 time=5.989 ms


Woah! That's pretty good. Mine was on Belong NBN in Brisbane.


Interesting that they're announcing 1.1.1.1 in Australia, while their CDN traffic still goes via Hong Kong


Dubai:

  PING 1.1.1.1 (1.1.1.1): 56 data bytes
  64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=48.728 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=48.450 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=47.266 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=45.320 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=46.470 ms


Copenhagen:

  PING 1.1.1.1 (1.1.1.1): 56 data bytes
  64 bytes from 1.1.1.1: icmp_seq=0 ttl=55 time=14.053 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=12.715 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=55 time=13.615 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=55 time=14.018 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=55 time=12.261 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=55 time=11.428 ms
  64 bytes from 1.1.1.1: icmp_seq=6 ttl=55 time=11.950 ms
  64 bytes from 1.1.1.1: icmp_seq=7 ttl=55 time=13.034 ms
  64 bytes from 1.1.1.1: icmp_seq=8 ttl=55 time=13.679 ms
  64 bytes from 1.1.1.1: icmp_seq=9 ttl=55 time=12.415 ms
  64 bytes from 1.1.1.1: icmp_seq=10 ttl=55 time=12.088 ms


  Pinging 1.1.1.1 with 32 bytes of data:
  Reply from 89.228.6.1: Destination net unreachable.
  Reply from 89.228.6.1: Destination net unreachable.
  Reply from 89.228.6.1: Destination net unreachable.
  Reply from 89.228.6.1: Destination net unreachable.

Any idea why my ISP redirects this IP?


Maybe an advertisement redirect for NXDOMAIN responses?


  PING 1.1.1.1 (1.1.1.1): 56 data bytes
  64 bytes from 1.1.1.1: icmp_seq=0 ttl=61 time=15.860 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=61 time=15.799 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=61 time=15.616 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=61 time=15.769 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=61 time=15.431 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=61 time=16.459 ms
  64 bytes from 1.1.1.1: icmp_seq=6 ttl=61 time=15.860 ms
  64 bytes from 1.1.1.1: icmp_seq=7 ttl=61 time=15.930 ms


Tokyo, domestic 2 Gbps fiber, but connected over Wi-Fi:

    PING 1.1.1.1 (1.1.1.1): 56 data bytes
    64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=5.531 ms
    64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=4.420 ms
    64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=5.450 ms
    64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=5.438 ms
    64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=4.231 ms
    64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=5.933 ms



    PING 8.8.8.8 (8.8.8.8): 56 data bytes
    64 bytes from 8.8.8.8: icmp_seq=0 ttl=57 time=6.440 ms
    64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=4.574 ms
    64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=4.684 ms
    64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=4.992 ms
    64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=5.942 ms
    64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=5.955 ms


From Tokyo, Japan:

  $ ping 1.1.1.1
  PING 1.1.1.1 (1.1.1.1): 56 data bytes
  64 bytes from 1.1.1.1: icmp_seq=0 ttl=58 time=111.781 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=102.982 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=102.206 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=110.135 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=110.085 ms

  $ ping 8.8.8.8
  PING 8.8.8.8 (8.8.8.8): 56 data bytes
  64 bytes from 8.8.8.8: icmp_seq=0 ttl=58 time=6.886 ms
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=5.475 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=58 time=5.674 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=58 time=5.557 ms
  64 bytes from 8.8.8.8: icmp_seq=4 ttl=58 time=7.066 ms

  $ ping 9.9.9.9
  PING 9.9.9.9 (9.9.9.9): 56 data bytes
  64 bytes from 9.9.9.9: icmp_seq=0 ttl=58 time=5.880 ms
  64 bytes from 9.9.9.9: icmp_seq=1 ttl=58 time=5.534 ms
  64 bytes from 9.9.9.9: icmp_seq=2 ttl=58 time=5.251 ms
  64 bytes from 9.9.9.9: icmp_seq=3 ttl=58 time=5.194 ms
  64 bytes from 9.9.9.9: icmp_seq=4 ttl=58 time=5.698 ms


Something interesting I saw pointed out in the reddit thread about this: the TTL in the replies from 1.1.1.1 and 8.8.8.8 is way different.

Your pings show the same thing: 128 vs 53. I tried on my laptop and get something similar. A traceroute to 1.1.1.1 shows 1 hop, which is wrong; 1.0.0.1 shows a few hops.

`dig google.com @1.1.1.1` doesn't work for me.
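One rough way to sanity-check those TTLs: senders typically start at 64, 128, or 255, so the hop count is roughly the nearest common initial TTL minus the observed value. A heuristic sketch (the initial-TTL assumption is just the usual OS defaults, not anything authoritative):

```python
def estimate_hops(observed_ttl, initial_ttls=(64, 128, 255)):
    """Estimate hop count from an observed TTL, assuming the sender
    started from the nearest common initial TTL at or above it."""
    initial = min(t for t in initial_ttls if t >= observed_ttl)
    return initial - observed_ttl

# A TTL of 54 from 8.8.8.8 suggests ~10 hops (initial 64), while a
# TTL of 128 suggests 0 hops -- i.e. the reply came from something
# on the local network, which would explain the bogus traceroute.
```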


It could be a technique they use to filter out all the junk traffic.


It might be your ISP caching the DNS in a local data center after you first request it.


There is no DNS involved when you're connecting directly to an IP address


Unless you tell it not to, ping will try a reverse lookup on the IP you are pinging in order to display that to you in the output. It's a good idea to keep that in mind when you ping something, especially if you notice the first ping is abnormally slow.


That reverse lookup time is not counted in the first ping.


Perhaps that depends on operating system. In the 30 years I have been using ping on Linux, the reverse lookup time is absolutely included in the first ping time.


If true, that's a bug.

Edit: Assuming this is the right file: https://github.com/iputils/iputils/blob/master/ping.c, I don't see the reverse lookup code anywhere. But then I'm not the most proficient in reading linux code.


I think AT&T's fiber modems are using 1.1.1.1. I'm getting < 1ms ping times and according to Cloudflare's website there's no data center close enough to me for that to be possible without violating the speed of light.


What happens if you go to https://1.1.1.1 in a browser? It should have a valid TLS cert and show a big banner that says, among other things, "Introducing 1.1.1.1". If your ISP's CPE or anything else is fucking with traffic to that IP, it won't load/display that.


I just get connection refused.


Call your ISP and ask them why they're blocking access to some websites. Ask them if there are any other websites they're blocking. Tweet about it. Etc


I'm getting this on Comcast in Knoxville. https://1.0.0.1 works fine, and https://1.1.1.1 works on my phone if I turn off wifi.


Here's what I'm seeing.

https://i.imgur.com/piisG5D.jpg


Comcast in northern NJ, USA, about 45 mi from NYC:

  $ ping 1.1.1.1
  PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=10.8 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=11.3 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=10.7 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=10.9 ms

  PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=10.7 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=11.3 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=60 time=11.1 ms
  64 bytes from 8.8.8.8: icmp_seq=4 ttl=60 time=10.5 ms


From a residential connection in New Zealand:

    $ ping 1.1.1.1

    Pinging 1.1.1.1 with 32 bytes of data:
    Reply from 1.1.1.1: bytes=32 time=4ms TTL=60
    Reply from 1.1.1.1: bytes=32 time=4ms TTL=60
    Reply from 1.1.1.1: bytes=32 time=4ms TTL=60
    Reply from 1.1.1.1: bytes=32 time=4ms TTL=60

    $ ping 8.8.8.8

    Pinging 8.8.8.8 with 32 bytes of data:
    Reply from 8.8.8.8: bytes=32 time=27ms TTL=60
    Reply from 8.8.8.8: bytes=32 time=27ms TTL=60
    Reply from 8.8.8.8: bytes=32 time=27ms TTL=60
    Reply from 8.8.8.8: bytes=32 time=28ms TTL=60
Seems that 1.1.1.1 is even faster than my local ISP's primary DNS:

    $ ping 202.180.64.10

    Pinging 202.180.64.10 with 32 bytes of data:
    Reply from 202.180.64.10: bytes=32 time=11ms TTL=61
    Reply from 202.180.64.10: bytes=32 time=11ms TTL=61
    Reply from 202.180.64.10: bytes=32 time=11ms TTL=61
    Reply from 202.180.64.10: bytes=32 time=11ms TTL=61


Fastest Bigpipe residential connection available in the middle of Auckland:

  $ ping -c 4 1.1.1.1

  PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=29.0 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=27.7 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=30.5 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=28.6 ms
  
  --- 1.1.1.1 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3004ms
  rtt min/avg/max/mdev = 27.731/28.993/30.573/1.028 ms

  $ ping -c 4 8.8.8.8

  PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=27.7 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=30.7 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=55 time=28.5 ms
  64 bytes from 8.8.8.8: icmp_seq=4 ttl=55 time=30.6 ms

  --- 8.8.8.8 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3005ms
  rtt min/avg/max/mdev = 27.772/29.409/30.710/1.280 ms
I'm starting to feel I should change ISPs...


On WiFi in Cambridge NZ

  PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=7.65 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=8.53 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=10.2 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=8.04 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=7.92 ms
  64 bytes from 1.1.1.1: icmp_seq=6 ttl=59 time=7.85 ms
  64 bytes from 1.1.1.1: icmp_seq=7 ttl=59 time=7.88 ms
  64 bytes from 1.1.1.1: icmp_seq=8 ttl=59 time=7.73 ms
  64 bytes from 1.1.1.1: icmp_seq=9 ttl=59 time=7.73 ms


BigPipe, Spark, Skinny and Vodafone don't believe in peering and thus don't peer with Cloudflare at APE. If you wanted the best performance then 2degrees, Orcon, Voyager or Slingshot are the best for this since they peer.


Vodafone have come to the party and are on AKL-IX now.


Residential in Auckland, NZ (Vibe, UFB)

64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=0.966 ms

Outstanding.

64 bytes from 8.8.8.8: icmp_seq=0 ttl=59 time=25.478 ms

Not so great.


Four! I'm getting 14 from fibre in Wellington. Google are 35 ish.


On Ethernet I'm able to get 1-2ms pings, on the same AT&T Fiber Gigabit. Wifi ruins both bandwidth and latency for me.


AT&T Fiber Gigabit in Nashville TN.

    iMac   ~ ping 1.1.1.1
    PING 1.1.1.1 (1.1.1.1): 56 data bytes
    64 bytes from 1.1.1.1: icmp_seq=0 ttl=64 time=0.688 ms
    64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.814 ms
    64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=1.153 ms
    64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=0.752 ms
    64 bytes from 1.1.1.1: icmp_seq=4 ttl=64 time=0.755 ms
    64 bytes from 1.1.1.1: icmp_seq=5 ttl=64 time=0.789 ms
    64 bytes from 1.1.1.1: icmp_seq=6 ttl=64 time=0.876 ms
    64 bytes from 1.1.1.1: icmp_seq=7 ttl=64 time=0.869 ms
    64 bytes from 1.1.1.1: icmp_seq=8 ttl=64 time=0.830 ms
    64 bytes from 1.1.1.1: icmp_seq=9 ttl=64 time=1.387 ms
    --- 1.1.1.1 ping statistics ---
    10 packets transmitted, 10 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 0.688/0.891/1.387/0.204 ms
Pinging 8.8.8.8 averages 8ms. CloudFlare must have a POP here in Nashville?


That's probably because AT&T is using 1.1.1.1 for something internal and breaking the public internet for its users: you get a really fast ping on 1.1.1.1, but it's not the 1.1.1.1 you are trying to reach.


Is this just speculation or can anybody confirm?

    traceroute to 1.1.1.1 (1.1.1.1), 64 hops max, 52 byte packets
     1  1dot1dot1dot1.cloudflare-dns.com (1.1.1.1)  1.117 ms  0.710 ms  0.727 ms


Seems AT&T uses 1.1.1.1 inside their modems. Oops!

Using 1.0.0.1 works.


Given that they're a CDN, I would expect them to. I'm jealous that BNA has AT&T peering but Kansas City has minimal/no peering.


haha, I knew that was you when I read Nashville, nodesocket


You should invest in some better wifi gear, it sounds like!

On a Unifi nano hd, with moderate signal, my latency only goes up 1ms.

Getting ~3.5 ms on wifi to 1.1.1.1, ~2.5ms ethernet


That's impressive. My AT&T wifi router caps bandwidth at 300 Mb/s (instead of 1 Gb/s on Ethernet) and adds 10-20 ms of latency. And this is standing next to it, using 5 GHz.


Man, I wish I could ever get pings this low - the link from my VDSL2 modem to the local CenturyLink CO alone is 8-15ms depending on the day.

Sucks that VDSL2 no longer supports fastpath, not that I could use it on an ADSL line due to bonding anyway :/


Out of curiosity, what is your complete Unifi / network setup?


  GW/Firewall: USG-XG
  Switches: 2x US-16-XG, 1x US-48, 2x US-8
  APs: 2x Nano HD, 2x AC Pro


I'm on Ethernet and fiber all the way. This may have to do more with how AT&T has constructed their fiber in this region. Where do you live?

https://chrissnell.com/hn/traceroute-1.1.1.1.png


How did you get that beautiful traceroute output?



Austin, TX.


Hangzhou:

  Pinging 1.1.1.1 with 32 bytes of data:
  Reply from 1.1.1.1: bytes=32 time=1ms TTL=128
  Reply from 1.1.1.1: bytes=32 time=1ms TTL=128
  Reply from 1.1.1.1: bytes=32 time=1ms TTL=128
  Reply from 1.1.1.1: bytes=32 time=2ms TTL=128

  Pinging 8.8.8.8 with 32 bytes of data:
  Reply from 8.8.8.8: bytes=32 time=91ms TTL=37
  Request timed out.
  Reply from 8.8.8.8: bytes=32 time=66ms TTL=37
  Request timed out.

  Pinging 1.0.0.1 with 32 bytes of data:
  Reply from 1.0.0.1: bytes=32 time=146ms TTL=50
  Reply from 1.0.0.1: bytes=32 time=144ms TTL=50
  Reply from 1.0.0.1: bytes=32 time=142ms TTL=50
  Reply from 1.0.0.1: bytes=32 time=140ms TTL=50


> Residential customers won't see anything this fast.

The standard Comcast black-box router/modem I have has a mean ping of ~9ms, and a min of ~3ms, so yeah, I'd have to agree.

(I get ~28ms to 1.1.1.1.)


I’m getting similar ping times from my Digital Ocean droplet in one of their NYC data centers where my website is hosted:

    PING 1.1.1.1 (1.1.1.1): 56 data bytes

    --- 1.1.1.1 ping statistics ---
    10 packets transmitted, 10 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 1.335/1.431/1.517/0.053 ms


I'm in Mexico:

1.1.1.1 60 ms

8.8.8.8 20 ms


Small village next to a provincial town in Europe on Cable: getting 11ms avg.


from Lima, Peru

  PING 1.0.0.1: 64 data bytes

  --- 1.0.0.1 ping statistics ---
  14 packets transmitted, 14 packets received, 0.0% packet loss
  round-trip min/avg/max/stddev = 120.784/126.222/128.433/2.036 ms

1.1.1.1 timed out; must be blocked by my ISP.


Keep in mind that ping time isn't the only factor in DNS lookup speed. For me (sonic.net in Palo Alto):

ping 1.1.1.1: ~22ms

ping 8.8.8.8: ~19ms

dig @1.1.1.1: ~45ms

dig @8.8.8.8: ~70ms

Disclaimer: Eyeballed averages over a few samples. A more rigorous test of DNS lookup times would be cool to see.

Disclosure: I work for Cloudflare, but not on DNS.


I'm guessing Google's resolvers are a little busier than Cloudflare's right now, since pretty much nobody outside of HN is hitting Cloudflare's yet. It will be a more interesting comparison in 6 months.


I'd be surprised if increased load has a negative effect on 1.1.1.1's performance.

We run a homogeneous architecture -- that is, every machine in our fleet is capable of handling every type of request. The same machines that currently handle 10% of all HTTP requests on the internet, and handle authoritative DNS for our customers, and serve the DNS F root server, are now handling recursive DNS at 1.1.1.1. These machines are not sitting idle. Moreover, this means that all of these services are drawing from the same pool of resources, which is, obviously, enormous. This service will scale easily to any plausible level of demand.

In fact, in this kind of architecture, a little-used service is actually likely to be penalized in terms of performance because it's spread so thin that it loses cache efficiency (for all kinds of caches -- CPU cache, DNS cache, etc.). More load should actually make it faster, as long as there is capacity, and there is a lot of capacity.

Meanwhile, Cloudflare is rapidly adding new locations -- 31 new locations in March alone, bringing the current total to 151. This not only adds capacity for running the service, but reduces the distance to the closest service location.

In the past I worked at Google. I don't know specifically how their DNS resolver works, but my guess is that it is backed by a small set of dedicated containers scheduled via Borg, since that's how Google does things. To be fair, they have way too many services to run them all on every machine. That said, they're pretty good at scheduling more instances as needed to cover load, so they should be fine too.

In all likelihood, what really makes the difference is the design of the storage layer. But I don't know the storage layer details for either Google's or Cloudflare's resolvers so I won't speculate on that.


> In fact, in this kind of architecture, a little-used service is actually likely to be penalized in terms of performance because it's spread so thin that it loses cache efficiency

This is exactly what I'm seeing with the small amount of testing I'm doing against google to compare vs cloudflare.

Sometimes Google will respond in 30ms (cache hit); more often than not it has to do at least a partial lookup (160ms), and sometimes it goes even further (400ms).

The worst I'm encountering on 1.1.1.1 is around 200ms for a cache miss.

Basically, what it looks like is that google is load balancing my queries and I'm getting poor performance because of it - I'm guessing they simply need to kill some of their capacity to see increased cache hits.

Anecdotally, I'm at least seeing better performance out of 1.1.1.1 than my ISP's (Internode), which has consistently done better than 8.8.8.8 in the past.

Also anecdotally, my short 1-2 month trial of systemd-resolved is now coming to a failed conclusion; I suspect I'll be going back to my pdnsd setup because it just works better.


So logging accounts for 25ms ;)


how are you pinging 8.8.8.8?

EDIT: nevermind - mistake on my end!


ICMP round-trip times don't necessarily prove anything - you need to be examining DNS resolution times.

Lots of network hardware (e.g., routers, and firewalls if they're not outright blocking) de-prioritises ICMP (and other types of network control/testing traffic), and the likelihood is that Google (and other free DNS providers) are throttling the number of ICMP replies that they send.

They're not providing an ICMP reply service; they're providing a DNS service. I had a situation during the week where I had to tell one of our engineers to stop tracking 8.8.8.8 as an indicator of network availability for this reason.


Using namebench[0], CloudFlare is about the 6th fastest for me, just ahead of Google.

1) Level3

2) DynGuide

3) UltraDNS

4) OpenDNS

5) Quad9

6) CloudFlare

7) Google

[0] https://code.google.com/archive/p/namebench/


Note: from Google Compute Engine, use 8.8.8.8, as it should always be faster. I'm guessing the 8.8.8.8 service exists in every Google Cloud region. Even better, use the default GCE auto-generated DNS IP that they configure in /etc/resolv.conf to get instance-name-resolution magic.


Usually best to use 169.254.169.254, which is the magic "cloud metadata address" that talks directly to the local hypervisor (I think?). That will recurse to public DNS as necessary. https://cloud.google.com/compute/docs/internal-dns


I agree that's usually best, but one exception is worth noting: if you want only publicly resolvable results, don't use 169.254.169.254. That address adds convenient predictable hostnames for your project's instances under the .internal TLD.

Also, no need to hardcode that address - DHCP will happily serve it up. It also has the hostname metadata.google.internal and the (disfavored for security reasons) bare short hostname metadata.
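For reference, the DHCP-populated /etc/resolv.conf on a GCE instance looks something like this (the project ID here is a placeholder):

```
search c.my-project.internal. google.internal.
nameserver 169.254.169.254
```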


How is this possible from a single location? The speed of light in a vacuum is only ~186 miles per millisecond.


Despite using a single IP, this is not served from a single location. Check out anycast: https://en.wikipedia.org/wiki/Anycast
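Anycast also squares with the physics: the RTT puts a hard ceiling on how far away the responding node can be (light in fiber propagates at roughly two-thirds of c), so single-digit pings just mean a nearby node. A back-of-envelope sketch:

```python
C_VACUUM_KM_S = 299_792                # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3   # rough propagation speed in fiber

def max_one_way_km(rtt_ms, speed_km_s=C_FIBER_KM_S):
    """Upper bound on server distance: the signal covers the round trip
    in rtt_ms, so the one-way distance is at most speed * time / 2."""
    return speed_km_s * (rtt_ms / 1000) / 2

# A 2 ms RTT bounds the server to within ~200 km over fiber, so the
# low pings in this thread imply nearby nodes, not one central server.
```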


Yup, anycast. This is also why:

The "backup" IPv4 address is 1.0.0.1 rather than, say, 1.1.1.2, and why they needed APNIC's help to make this work.

In theory you can tell other network providers "Hi, we want you to route this single special address 1.1.1.1 to us" and that would work. But in practice most of them have a rule that says "the smallest routes we care about are a /24", and 1.1.1.1 on its own is a /32. So to make this work you have to route the entire /24, and although you can put other services in that /24 if you _really_ want, they will all get routed together, including failover routing and other practices. So it's usually best to "waste" an entire /24 on a single anycast service. Anycast is not exactly a cheap homebrew thing, so a /24 isn't _that_ much to use up.
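The covering-/24 point can be illustrated with Python's standard `ipaddress` module: the service's /32 sits inside the announced /24, and all 256 addresses in that block travel together:

```python
import ipaddress

service = ipaddress.ip_network("1.1.1.1/32")    # the single anycast address
announced = ipaddress.ip_network("1.1.1.0/24")  # smallest block most networks accept

# Announcing the /24 attracts traffic for 1.1.1.1, at the cost of the
# other 255 addresses in the block following the same routing.
assert service.subnet_of(announced)
print(announced.num_addresses)  # 256
```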


Interestingly, there are routing problems to China for 1.1.1.1 (but not 1.0.0.1): http://ping.pe/1.1.1.1


Poznań, Poland

    1.1.1.1: ~17ms (the first one took 179ms, but after that it's pretty fast)
    8.8.8.8: ~16ms


From London on a residential ADSL connection:

  8.8.8.8 - ping 7ms dig 14ms
  8.8.4.4 - ping 7ms dig 16ms
  1.1.1.1 - ping 7ms dig 16ms
  1.0.0.1 - ping 6ms dig 15ms
  
  9.9.9.9 - ping 6ms dig 17ms
CF & Google about the same for me. Good to have an alternative in CF though, and certainly a very memorable IP :)


I'm in a city in southern Japan (so most of my traffic needs to go to Tokyo first), on a gigabit fiber connection.

    --- 1.1.1.1 ping statistics ---
    rtt min/avg/max/mdev = 30.507/32.155/36.020/1.419 ms

    --- 8.8.8.8 ping statistics ---
    rtt min/avg/max/mdev = 19.618/21.572/23.009/0.991 ms
The traceroutes are inconclusive but they kind of look like Google has a POP in Fukuoka and CloudFlare are only in Tokyo.

edit: Namebench was broken for me, but running GRC's DNS Benchmark my ISP's own resolver is the fastest, then comes Google 8.8.8.8, then Level3 4.2.2.[123], then OpenDNS, then NTT, and then finally 1.1.1.1.


Pretty sure that Google time for Sydney is an outlier.

This is from my residential ADSL2 connection in Sydney:

  [Bigs-MacBook-Pro-2:~] bigiain% ping 8.8.8.8
  PING 8.8.8.8 (8.8.8.8): 56 data bytes
  64 bytes from 8.8.8.8: icmp_seq=0 ttl=59 time=21.257 ms
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=25.831 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=59 time=22.231 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=59 time=21.498 ms
  ^C
  --- 8.8.8.8 ping statistics ---
  4 packets transmitted, 4 packets received, 0.0% packet loss
  round-trip min/avg/max/stddev = 21.257/22.704/25.831/1.841 ms
  [Bigs-MacBook-Pro-2:~] bigiain% ping 1.1.1.1
  PING 1.1.1.1 (1.1.1.1): 56 data bytes
  64 bytes from 1.1.1.1: icmp_seq=0 ttl=59 time=22.481 ms
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=38.814 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=19.923 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=19.911 ms
  ^C
  --- 1.1.1.1 ping statistics ---
  4 packets transmitted, 4 packets received, 0.0% packet loss
  round-trip min/avg/max/stddev = 19.911/25.282/38.814/7.882 ms
And this is from an ec2 instance is ap-southeast-2:

  ubuntu@ip-172-31-xx-xx:~$ ping 8.8.8.8
  PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=2.24 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=2.27 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=55 time=2.30 ms
  64 bytes from 8.8.8.8: icmp_seq=4 ttl=55 time=2.26 ms
  64 bytes from 8.8.8.8: icmp_seq=5 ttl=55 time=2.31 ms
  64 bytes from 8.8.8.8: icmp_seq=6 ttl=55 time=2.25 ms
  ^C
  --- 8.8.8.8 ping statistics ---
  6 packets transmitted, 6 received, 0% packet loss, time 5007ms
  rtt min/avg/max/mdev = 2.244/2.274/2.310/0.066 ms
  ubuntu@ip-172-31-xx-xx:~$ ping 1.1.1.1
  PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=1.03 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=55 time=1.05 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=55 time=1.05 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=55 time=1.01 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=55 time=1.07 ms
  ^C
  --- 1.1.1.1 ping statistics ---
  5 packets transmitted, 5 received, 0% packet loss, time 4004ms
  rtt min/avg/max/mdev = 1.015/1.046/1.076/0.035 ms


From Hyderabad, India

Cloudflare:

  Reply from 1.0.0.1: bytes=32 time=119ms TTL=56
  Reply from 1.0.0.1: bytes=32 time=74ms TTL=56
  Reply from 1.0.0.1: bytes=32 time=74ms TTL=56
  Reply from 1.0.0.1: bytes=32 time=74ms TTL=56
  Reply from 1.0.0.1: bytes=32 time=74ms TTL=56

GoogleDNS:

  Reply from 8.8.8.8: bytes=32 time=44ms TTL=55
  Reply from 8.8.8.8: bytes=32 time=43ms TTL=55
  Reply from 8.8.8.8: bytes=32 time=43ms TTL=55
  Reply from 8.8.8.8: bytes=32 time=43ms TTL=55
  Reply from 8.8.8.8: bytes=32 time=44ms TTL=55


From Hyderabad, another ISP

  Pinging 1.1.1.1 with 32 bytes of data:
  Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
  Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
  Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
  Reply from 1.1.1.1: bytes=32 time=45ms TTL=53

  Ping statistics for 1.1.1.1:
      Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
  Approximate round trip times in milli-seconds:
      Minimum = 45ms, Maximum = 45ms, Average = 45ms

  Pinging 1.0.0.1 with 32 bytes of data:
  Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
  Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
  Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
  Reply from 1.0.0.1: bytes=32 time=46ms TTL=54

  Ping statistics for 1.0.0.1:
      Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
  Approximate round trip times in milli-seconds:
      Minimum = 46ms, Maximum = 46ms, Average = 46ms

  Pinging 8.8.4.4 with 32 bytes of data:
  Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
  Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
  Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
  Reply from 8.8.4.4: bytes=32 time=29ms TTL=56

  Ping statistics for 8.8.4.4:
      Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
  Approximate round trip times in milli-seconds:
      Minimum = 29ms, Maximum = 29ms, Average = 29ms

  Pinging 8.8.8.8 with 32 bytes of data:
  Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
  Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
  Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
  Reply from 8.8.8.8: bytes=32 time=21ms TTL=56

  Ping statistics for 8.8.8.8:
      Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
  Approximate round trip times in milli-seconds:
      Minimum = 21ms, Maximum = 21ms, Average = 21ms

  Pinging 208.67.220.220 with 32 bytes of data:
  Reply from 208.67.220.220: bytes=32 time=45ms TTL=54
  Reply from 208.67.220.220: bytes=32 time=46ms TTL=54
  Reply from 208.67.220.220: bytes=32 time=45ms TTL=54
  Reply from 208.67.220.220: bytes=32 time=50ms TTL=54

  Ping statistics for 208.67.220.220:
      Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
  Approximate round trip times in milli-seconds:
      Minimum = 45ms, Maximum = 50ms, Average = 46ms

  Pinging 208.67.222.222 with 32 bytes of data:
  Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
  Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
  Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
  Reply from 208.67.222.222: bytes=32 time=61ms TTL=54

  Ping statistics for 208.67.222.222:
      Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
  Approximate round trip times in milli-seconds:
      Minimum = 61ms, Maximum = 61ms, Average = 61ms


Cafe in Chiang Rai, Thailand:

    $ ping -n 1.1.1.1
    round-trip min/avg/max/stddev = 16.696/18.643/22.571/2.056 ms

    $ ping -n 8.8.8.8
    round-trip min/avg/max/stddev = 38.410/45.663/57.684/8.075 ms


"And look at these ping times ..."

I would be interested to hear from google (8.8.8.8) how much ping traffic that address gets ...

I know that I will quickly ping 8.8.8.8 as a very quick and dirty test of network up ... it's just faster to type than any other address I could test with.


It looks like you are testing from locations where Cloudflare either has servers or exchanges traffic, which is likely in a data center given the traffic it transports. What most users want is the ping time from home or the office.


Cape Town, South Africa, Residential ADSL

    1.1.1.1 ~ 26ms
    8.8.8.8 ~ 42ms


Pasadena, CA

1.1.1.1 continually timed out.

1.0.0.1 succeeded

  18 packets transmitted, 18 packets received, 0.0% packet loss
  round-trip min/avg/max/stddev = 10.178/11.128/12.585/0.576 ms


I assume a cable modem adds at least 8ms of latency, because I get 8ms of latency to my default router, and about 12-15ms to any of those hosts.


I live in Greece; Google’s DNS is 20-30% faster.


DNS-over-HTTPS doesn’t make as much sense to me as DNS-over-TLS. They are effectively the same thing, but HTTPS has the added overhead of the HTTP headers per request. If you look at the currently in progress RFC, https://tools.ietf.org/html/draft-ietf-doh-dns-over-https-04, this is quite literally the only difference. The DNS request is encoded as a standard serialized DNS packet.

The article mentions QUIC as something that might make HTTPS faster than standard TLS. I guess over time DNS servers can start encoding responses as JSON, like Google’s impl, though there is no spec that I’ve seen yet that actually defines that format.

Can someone explain what the excitement around DNS-over-HTTPS is all about, and why DNS-over-TLS isn’t enough?

EDIT: I should mention that I started implementing this in trust-dns, but after reading the spec became less enthusiastic about it and more interested in finalizing my DNS-over-TLS support in the trust-dns-resolver. The client and server already support TLS, I couldn't bring myself to raise the priority enough to actually complete the HTTPS impl (granted it's not a lot of work, but still, the tests etc, take time).
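To make the "standard serialized DNS packet" point concrete: the body a DoH client POSTs (with Content-Type application/dns-message) is just the familiar RFC 1035 wire format, the same bytes DNS-over-TLS sends length-prefixed over its stream. A minimal sketch in Python; the helper name and example domain are my own, not from the draft:

```python
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Serialize an RFC 1035 DNS query packet (qtype 1 = A record).

    This exact byte string is what a DoH client would POST; DoT would
    send it over TLS prefixed with a two-byte length instead.
    """
    # Header: ID=0, flags=0x0100 (recursion desired), QDCOUNT=1, rest 0
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split("."))
    question = qname + b"\x00" + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

pkt = build_query("example.com")
print(len(pkt), "bytes:", pkt.hex())
```

The only thing the DoH draft adds on top of this is the HTTP framing around it, which is exactly the overhead being discussed.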


Some ISPs block outbound DNS from customers to anywhere but their resolvers, filtering based on target port. This is a particularly common trick in countries that attempt to censor the internet.

It's a lot harder to do that with DNS-over-HTTPS because it looks like normal traffic.

That said, in this case ISPs can just null route the IP address of the obvious main resolvers such as 1.1.1.1. I imagine most of the benefit is surely to people who can spin up their own resolvers.


When we add TLS on top of the protocol, ISPs can only filter based on port at that point. We can run DNS on 443 if that helps, but as you said, static well-known IPs can then be blocked.

> I imagine most of the benefit is surely to people who can spin up their own resolvers.

There are already many easily run DNS resolvers available. Is there a benefit you see in operating them over HTTPS that improves on that?


> When we add TLS on top of the protocol, ISPs can only filter based on port at that point.

And SNI… :(


This is really the elephant in the room. For all we know, ISP bad-actors have never cared about DNS for data-collection purposes, and they're already using SNI to gather data to sell to marketers. I think it's absolutely crucial to find a way to (at least optionally) send SNI encrypted to the server.


There's domain fronting, which uses SNI to bypass censorship! :)

https://en.wikipedia.org/wiki/Domain_fronting


If this were to become an issue, I guess Cloudflare could try to disable SNI.


The client sends SNI, so how could the server opt out?


You just answered your own question. Cloudflare creates an open-source client that users install locally.


The client that sends SNI is, AFAIK, the browser or a similar piece of software. Some older browsers don't support SNI so they can only access single-vhost-per-ip over https.

This means you'll have a really hard time trying to get rid of SNI system-wide, what with a lot of minor apps making their own https connections (granted, on Android or iOS they probably use a common API, but not on a computer).


Where's the button to install your own DNS resolver on iOS? Or non-rooted Android, for that matter.


Someone shared this lovely iOS app yesterday:

DNSCloak • DNSCrypt DoH client by Sergey Smirnov https://itunes.apple.com/ca/app/dnscloak-dnscrypt-doh-client...

It supports DNSCrypt, DNSSEC and DNS-over-HTTPS, the IAP are for tips :)

It works via running a VPN server on your device.

To change your normal plaintext DNS resolver just tap the circle-i on your WiFi network.


Well, with SNI the concern isn't DNS. Any TLS connection that supports SNI (basically everything that isn't ancient) would have to be fixed. Also, SNI is a pretty useful thing to have, and getting rid of it doesn't exactly fix much. Without SNI the server only has the destination IP address to determine which site, and thus which certificate, to send to the client. Hosting HTTPS sites with multiple certificates on one IP address only works because of SNI; you would break a large portion of the web by disabling it.

Also, even if you do disable SNI, the server still sends back the certificate with the domain names in it. And even ignoring all of that, there's still reverse DNS, which will probably be accurate if they send mail from that server, and you can always do a DNS lookup for every domain name there is to build a map of which domains point to a given IP. Due to DNS-based geolocation that won't work for every site, but the sites using that are going to be big enough to find their IP address ranges via another method.

In short, there's really no good solution here, but an amendment to TLS could conceivably make it impossible to narrow down which of the sites an IP address hosts the user was visiting. That could actually be good enough for traffic to e.g. Cloudflare.


Non-rooted Android: you have to set a static IP for every network, and then there will be an option to enter DNS servers. They default to Google DNS. Static IP settings are under Advanced.


Server could advertise no need to use SNI in advance. Or we could do SNI after actually establishing an encrypted session...


I suppose there is also domain fronting [1], but it won't be fast or an easy-to-remember IP address anymore. And if you need that, you might need a VPN anyway?

[1] https://en.wikipedia.org/wiki/Domain_fronting


It's amazing that governments haven't shut down shared domains to prevent domain-fronting.


1.1.1.1 does support DNS-over-TLS as well: https://developers.cloudflare.com/1.1.1.1/dns-over-tls/


Yes! And I plan to actually build in a default setup to use that now that it exists. I should have mentioned up front that this is the most exciting thing to me in the announcement.

This is a very exciting development, thank you for posting this.


One of the use cases for DNS-over-HTTPS given in the draft was to allow web applications access to DNS directly via existing browser APIs.


I've implemented DNS before. Doing this saves an entire 300 lines of code. At the same time, it makes the DNS server much more complicated. On top of that, implementing a compliant POSIX libc will now either use a completely different code path or pull in a huge amount of code to implement HTTP, HTTP/2, and QUIC. If the simpler, cleaner, and more performant route is taken, it will break when someone screws up "legacy" DNS without noticing, because it works in the browser.

It's not worth the complexity of multiple protocols that do the same thing. And it's not worth making the base system insanely complicated so that the magic 4 letters 'http' can show up.

TLS? Yeah, since the simpler secure DNSes failed, we might as well use that. But let's try to keep http complexity contained.


Ok that’s actually pretty cool.


Wonder if this will pave the way for other protocols over HTTPS.


Hopefully not. We need to stop working around crappy setups on crappy networks, which is really all X-over-HTTPS is about.


It seems like crappy networks are the norm nowadays, and the preference of the ISPs is to offer the web only. You need a middle box just to access the internet at-large (e.g Tor). Masquerading traffic as web traffic appears to be a good tactic, though inefficient/sloppy.


Yeah, but once everything is tunneled over HTTP it will finally fix the network operator problem once and for all since you can't filter applications using ports.


Cloudflare addresses this in the blog post:

There are a couple of different approaches. One is DNS-over-TLS. That takes the existing DNS protocol and adds transport layer encryption. Another is DNS-over-HTTPS. It includes security but also all the modern enhancements like supporting other transport layers (e.g., QUIC) and new technologies like server HTTP/2 Server Push. Both DNS-over-TLS and DNS-over-HTTPS are open standards. And, at launch, we've ensured 1.1.1.1 supports both.

We think DNS-over-HTTPS is particularly promising — fast, easier to parse, and encrypted.


DNS over HTTPS would be harder for governments and other middlemen to block or intercept, despite being less efficient. It would look like any other HTTPS request, especially if browsers agreed to universally support it.


No it wouldn't. They're both encrypted with the same method so they can't tell whether http is used or not.


TLS isn't magic: you can still observe the encrypted stream and make assumptions based on bytes sent/received on the wire, protocol patterns, and timing. See the CRIME and BREACH attacks.


Sorry, confused. HTTPS requests are prolific, while encrypted DNS requests aren't. Why wouldn't the former be harder to detect?


How would you tell that an encrypted chunk of data is HTTPS instead of DNS? The best you'd be able to do is guess based on behavior that it's DNS.


Destination port might be easy to differentiate dns over tls vs dns over https :)


Perhaps if the attacker filters traffic first by protocol, it's harder but not at all impossible. I'd guess that DNS-over-HTTPS packets won't be hard to identify by other means.


of course dns over https to cloudflare can be mixed on the same h2 connection with other https to the same host. It starts to get interesting.

(this is one of the advantages of https vs straight tls)


rfc 8336. h2 coalescing. h2 push. caching. it starts to add up to a very interesting story.


Thank you for responding, Patrick. As one of the authors of the RFC, your views on this are a great contribution to the conversation.

> rfc 8336

I'll have to read up on this, thanks for the link.

> h2 coalescing

DNS is already capable of using TCP/TLS (and by its nature UDP) for multiple DNS requests at a time. Is there some additional benefit we get here?

> h2 push

This one is interesting, but DNS already has optimizations built in for things like CNAME and SRV record lookups, where the IP is implicitly resolved when available and sent back with the original request. Is this adding something additional to those optimizations?

> caching

DNS has caching built-in, TTLs on each record. Is there something this is providing over that innate caching built into the protocol?

> it starts to add up to a very interesting story.

I'd love to read about that story, if someone has written something, do you have a link?

Also, a question that occurred to me, are we talking about the actual website you're connecting to being capable of preemptively passing DNS resolution to web clients over the same connection?

Thanks!


this story will evolve as the http ecosystem evolves - but that's part of the point.

wrt coalescing/origin/secondary-certificates, it's a powerful notion to consider your recursive resolver's ability to serve other http traffic on the same connection. That has implications for anti-censorship and traffic analysis.

Additionally the ability to push DNS information that it anticipates you will need outside the real time moment of an additional record has some interesting properties.

DoH right now is limited to the recursive resolver case. But it does lay the groundwork for other http servers being able to publish some DNS information - that's something that needs some deep security based thinking before it can be allowed, but this is a step towards being compatible with that design.

wrt caching - some apps might want a custom dns cache (as firefox does), but some may simply use an existing http cache for that purpose without having to invent a dns cache. leveraging code is good. There are lots of other little things like that which http brings for free - media type negotiation, proxying, authentication, etc..


> There are lots of other little things like that which http brings for free - media type negotiation, proxying, authentication, etc..

Reading a little between the lines here, would you say that at some point we effectively replace the existing DNS resolution graph with something implemented entirely over http? Where features like forwarding and proxying would have more common off the shelf tooling?

I can start to see a picture here that is more about common/shared code, and less about actual features of the underlying protocols.


As a complete layperson, h2 push might be interesting because a DNS resolver could learn to detect patterns in DNS queries (e.g. someone who requests twitter.com usually requests pbs.twimg.com and abs.twimg.com right after) and start to push those automatically when they get the query for twitter.com.


>The article mentions QUIC as being something that might make HTTPS faster than standard TLS.

Even with TLS 1.3 0-RTT?


yes, quic will make dns over https more resilient to packet loss than a tls based approach.


exactly...


How is any of this more secure against your ISP, given someone willing to do reverse lookups on IP addresses?

If someone controls the routers, isn't it nearly useless?

So, for example, all mobile 4G providers could laugh at this and build a nearly-as-good database of every site you visit?


Reverse DNS is a lot more difficult than just intercepting DNS requests. Especially with virtual hosts, caching proxies and so on.


How much overhead? Is the request or response larger than a single packet?


TIL you can also use 1.1 and it will expand to 1.0.0.1

  $> ping 1.1

  PING 1.1 (1.0.0.1) 56(84) bytes of data.
  64 bytes from 1.0.0.1: icmp_seq=1 ttl=55 time=28.3 ms
  64 bytes from 1.0.0.1: icmp_seq=2 ttl=55 time=33.0 ms
  64 bytes from 1.0.0.1: icmp_seq=3 ttl=55 time=43.6 ms
  64 bytes from 1.0.0.1: icmp_seq=4 ttl=55 time=41.7 ms
  64 bytes from 1.0.0.1: icmp_seq=5 ttl=55 time=56.5 ms
  64 bytes from 1.0.0.1: icmp_seq=6 ttl=55 time=38.4 ms
  64 bytes from 1.0.0.1: icmp_seq=7 ttl=55 time=34.8 ms
  64 bytes from 1.0.0.1: icmp_seq=8 ttl=55 time=45.7 ms
  64 bytes from 1.0.0.1: icmp_seq=9 ttl=55 time=45.2 ms
  64 bytes from 1.0.0.1: icmp_seq=10 ttl=55 time=43.1 ms


The most useful case for this shortcut is 127.1 -> 127.0.0.1


Don't try that in the wild; most software out there ignores the spec and uses some arbitrary regex to validate the IP format.

e.g. Python:

    octets = ip_str.split('.')
    if len(octets) != 4:
        raise AddressValueError("Expected 4 octets in %r" % ip_str)
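To see both behaviors side by side: Python's own stdlib is split on this. `socket.inet_aton` wraps the C library's inet_aton(3) and accepts the shorthand forms, while the `ipaddress` module (quoted above) rejects anything that isn't four octets. A quick check:

```python
import ipaddress
import socket

# inet_aton follows the classic C-library shorthand rules:
# "127.1" means first byte 127, last 24 bits = 1 -> 127.0.0.1
assert socket.inet_aton("127.1") == b"\x7f\x00\x00\x01"
assert socket.inet_aton("1.1") == b"\x01\x00\x00\x01"  # the 1.0.0.1 trick

# ipaddress insists on exactly four dotted octets
try:
    ipaddress.IPv4Address("127.1")
except ipaddress.AddressValueError as exc:
    print("rejected:", exc)
```

So whether "127.1" works depends entirely on which parser sits between you and the socket.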


What spec says that 127.1 and 127.0.0.1 are equivalent?


I don’t actually think it’s in a spec formally but is in a common c lib[0].

> a.b

> Part a specifies the first byte of the binary address. Part b is interpreted as a 24-bit value that defines the rightmost three bytes of the binary address. This notation is suitable for specifying (outmoded) Class A network addresses.

[0]: https://linux.die.net/man/3/inet_aton


The POSIX spec (IEEE 1003.1) says the same thing for inet_addr(), so it does occur in an actual spec.


Thanks. I just came across the man page myself while I was writing this tiny program.

  $ cat 127.1.c
  #include <stdio.h>
  #include <arpa/inet.h>
   
  int main(int argc, char *argv[])
  {
      struct in_addr addr;
   
      if (inet_aton(argv[1], &addr))
          printf("%08x\n", addr.s_addr);
   
      return 0;
  }
  $ make 127.1 CFLAGS=-Wall
  cc -Wall     127.1.c   -o 127.1
  $ ./127.1 1.1
  01000001
  $ ./127.1 127.1
  0100007f


You are right, I faithfully assumed it's a spec without checking. Thanks.


0, which is shorthand for 0.0.0.0, is likely the most code-golf-y way to write localhost, as many [EDIT: Linux] systems alias 0.0.0.0 to 127.0.0.1:

  $ ping 0
  PING 0 (127.0.0.1) 56(84) bytes of data.
  64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms

Of course, don't expect this to work universally. A lot of software will try to be clever with input validation, and fail.

Tangentially related: https://fosdem.org/2018/schedule/event/email_address_quiz/


It's not fully true that 127.0.0.1 is the same as 0.0.0.0. For example, binding a webserver to 0.0.0.0 puts it on all interfaces, including public ones, while 127.0.0.1 is strictly localhost.


0.0.0.0 is not localhost. It's "any address".


Yes, you're right.

What I was trying to say is - On Linux, INADDR_ANY (0.0.0.0) supplied to connect() or sendto() calls is treated as a synonym for INADDR_LOOPBACK (127.0.0.1) address.

Not so for bind(), of course.
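This is easy to demonstrate with a couple of sockets (Linux-specific behavior; the port is picked dynamically, nothing here is special):

```python
import socket

# Server bound strictly to loopback
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # port 0 -> kernel picks a free port
srv.listen(1)
port = srv.getsockname()[1]

# On Linux, connect() to 0.0.0.0 (INADDR_ANY) is treated as loopback,
# so this reaches the server even though it never listened on 0.0.0.0.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("0.0.0.0", port))
conn, addr = srv.accept()
print("connected from", addr)

cli.close(); conn.close(); srv.close()
```

The same connect() fails on macOS, which doesn't apply the alias.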


Stays unaliased on macOS:

  My-MacBook-Pro:bottle mrkstu$ ping 0
  PING 0 (0.0.0.0): 56 data bytes
  ping: sendto: No route to host


However, pinging 127.1 works the same as localhost.


Where have you been all my life?


Sitting at 127.1, apparently.


You can also use the decimal value of the IP, without the dots: https://16843009


Hex works too: https://0x1010101


Sadly, binary / octal don't work: https://0b1000000010000000100000001 / https://0o100200401


Octal works too, with the older 0-prefix convention: https://0100200401
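All three notations are the same 32-bit integer in different bases, which is quick to verify (the helper name below is arbitrary):

```python
import socket
import struct

def ip_to_int(ip: str) -> int:
    """Pack a dotted-quad into the single integer the browser will accept."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

assert ip_to_int("1.1.1.1") == 16843009      # https://16843009
assert ip_to_int("1.1.1.1") == 0x1010101     # https://0x1010101
assert ip_to_int("1.1.1.1") == 0o100200401   # https://0100200401 (0-prefix octal)

# And back again:
print(socket.inet_ntoa(struct.pack("!I", 16843009)))
```

The browser never sees an "IP address" at all here; the string is handed to the resolver library, which does inet_aton-style parsing.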


Ah, I had completely forgotten about that. Thanks!


You can also sing that number to the tune of the famous 8675309 song with very little rubato.


If you want to memorize the integer, it's not a bad mnemonic to use... Why do you hate me?


I just upvoted you, bc (a) funny and (b) TIL a new word (rubato).



  1.2 -> 1.0.0.2
  1.2.3 -> 1.2.0.3
But then, much software would fail here - Firefox/Chrome, for example, would both treat that as a bareword and redirect to a search page.


It works as expected if you give it the http://1.2.3 scheme prefix.

The input bar is a search bar in modern browsers.


Or if you follow it with a trailing slash, for less typing

  1.1/


Or if you prefix it with //

  //1.1.1.1
It's one more letter than the suffix, but as a prefix it's a bit clearer. I've known companies to post LAN hostname addresses that way, and in written/printed materials it stands out pretty clearly as an address to type.

It follows the URL standards (no scheme implies the current or default scheme). Many auto-linking tools (such as Markdown or Word) recognize it by default (though sometimes results are unpredictable given scheme assumptions). It's also increasingly the recommendation for HTML resources where you want to help ensure same-scheme requests (a good example: cross-server/CDN CSS and JS links are now typically written as //css-host.example.com/some/css/file.css).


I wish that they talked a bit more about their stance regarding censorship. They have a small paragraph talking about the problem, but they don't talk about the "solution".

While Cloudflare has been pretty neutral about censoring sites in the past (notably, pirate sites), the Daily Stormer incident put them in a tough spot[1].

They talk a bit about Project Galileo (the link is broken BTW, it should be https://www.cloudflare.com/galileo), but their examples do not mention topics that would be controversial in western societies, and the site is quite vague. Would they also protect sites like sci-hub, for example?

While I would rather use a DNS not owned by Google, I have never seen any site blocked by them, including sites with a nation-wide block. I hope that Cloudflare is able to do the same thing.

1: https://torrentfreak.com/cloudflare-doesnt-want-daily-storme...


There's a pretty big difference between terminating a business relationship (which is what Cloudflare did to Daily Stormer, and which Google also did a couple days before Cloudflare did) and refusing to answer DNS queries for third-party domains with which there is no business relationship. It's hard to imagine how the former could be used as precedent to compel the latter.

Cloudflare has no interest in censorship -- the whole reason the Daily Stormer thing was such a big deal was because it's the only time Cloudflare has ever terminated a customer for objectionable content. Be sure to read the blog post to understand: https://blog.cloudflare.com/why-we-terminated-daily-stormer/

(Disclosure: I work for Cloudflare but I'm not in a position to set policy.)


I probably should have made a clearer point instead of linking to TorrentFreak.

I did not mean that I was worried that CloudFlare's DNS would start blocking sites whose content they disagree with (although that would also be worrisome).

I'm worried that copyright holders might be able to use the Daily Stormer case as a precedent to force CloudFlare to stop offering services to infringing sites.

If they are able to do that, I can also see them attempting to force CloudFlare to remove DNS entries as well.


Right, as I said, it's hard for me to see how one could be used as precedent for the other given how different the situations are. And if you could use it, you could just as easily do the same against Google DNS.

I'm not a lawyer, though.


Bear in mind, they dropped Daily Stormer because they were claiming Cloudflare agreed with their ideology. Which someone in the previous discussion pointed out was a Terms of Service violation.

DNS resolving offers no such terms and no such reason to make such a claim. I don't see that playing here. And bear in mind, when the CEO did it, he wrote about how dangerous it was that companies had that power. I don't feel other companies running other DNS services hold that level of concern or awareness.

When you consider that their "competitor" in the space of free DNS resolvers with easy-to-remember IPs is Google, who recently tried blocking the word "gun" in Google Shopping... it's hard not to see the introduction of a Cloudflare DNS resolver as at least a net positive for resisting censorship. And more options is almost always better.


Cloudflare is a private company and they're free to do what they want but their reasoning for the Daily Stormer termination felt like a convenient excuse to me. I'm sure that it was the best business decision for them but when I read a blog post touting 1.1.1.1 as being anti-censorship, I roll my eyes.

Anti-censorship so long as Matthew Prince doesn't have a bad morning.

I run my own DNS-over-TLS resolver at a trusted hosting provider. It upstreams to a selection of roots for which I have reasonable trust. My resolver does DNS-over-TLS, DNS-over-HTTPS, and plain DNS. Multiple listening ports for the secure stuff so that I have something that works for most circumstances.


I would still take someone who can have a bad morning and decide to censor one site (and then write about how concerning that power is), over entities that regularly view it as their "responsibility" to shut down sites and remove content they find objectionable.

I think it's great if people are running their own DNS. :) But I'm certainly not mad that Cloudflare's offering yet another public alternative. As I said, more choices is better.


Running your own root content DNS server isn't particularly hard, note. The public root content DNS server operators are not interested in serving up dummy answers for all sorts of internal stuff that leaks out to the root content DNS servers any more than you are interested in sending it to them. (-:


>because they were claiming Cloudflare agreed with their ideology.

That was a lie. It was a commenter on an article.


My tendency would be to ask for some sort of proof, though I realize asking for proof of nonexistence of evidence is near impossible. I'm inclined at present to place more trust in Cloudflare's word at this point, but I try to keep an open mind. It's always good to know both sides' stories.


Well, you have the CloudFlare blog where Prince states "The tipping point for us making this decision was that the team behind Daily Stormer made the claim that we were secretly supporters of their ideology."[0] So, all that is necessary is to find this statement. I won't link to it, but the Daily Stormer has been active on the clear web for most of the time between the seizure of their domain and now. Prince never provided any proof for his claim, not even a screenshot. Of course, a screenshot would have given away, via the visual context, that the statement wasn't from the "team" but from a forum commenter presenting the notion in a joking manner.

As it happens, an internal memo "leaked" to the media wherein Prince admitted he pulled the plug on The Daily Stormer because they are "assholes" and admitted that “The Daily Stormer site was bragging on their bulletin boards about how Cloudflare was one of them."[1] These forums are also what served as the area for readers to comment on articles. Ergo, he acknowledged that he knew his statement about the Daily Stormer "team" claiming CloudFlare supported their ideology was a lie.

You also have to go back in time and consider the context in which The Daily Stormer was successively de-platformed. The site had been publishing low-brow racist commentary including jokes about pushing Jews into ovens and referring to Africans as various simian species for years. It was, however, a single article wherein they mocked the woman who died at the Charlottesville, VA conflict between the alt-right and antifa that led to the widespread outrage that resulted in the The Daily Stormer being temporarily kicked off the internet.[2]

At the same time that Cloudflare was banning the Daily Stormer, they were (and still are, AFAIK) providing services to pro-pedophilia and ISIS web sites. The Daily Stormer itself pointed out not only the hypocrisy of this situation but also the risk it created to CloudFlare's continued safe harbor protections.[3]

[0]: https://blog.cloudflare.com/why-we-terminated-daily-stormer/ [1]: https://gizmodo.com/cloudflare-ceo-on-terminating-service-to... [2]: https://www.independent.co.uk/life-style/gadgets-and-tech/da... [3]: https://web.archive.org/web/20180401233331/https://dailystor...


You seem to know an awful lot about this specific case, and I'll defer to you on that. I know about the general case, technically speaking (though merely a DNS hobbyist).

However, having a business relationship with another organization is not a right. Hate speakers are not a protected class.

DNS does not operate in the same manner nor with the same assumptions. One can obviously run their own DNS resolver as has been pointed out repeatedly in this thread.

Please list the, "pro-pedophilia and ISIS web sites." hosted by Cloudflare?

Edit: There's probably a business opportunity for a registrar/DNS provider/host that operates under 'free speech purism,' though it's hard to say it won't go the way of usenet in that regard.


>Please list the, "pro-pedophilia and ISIS web sites." hosted by Cloudflare?

It's in the linked archived DS article and I confirmed the information is still true.


Actually, they have already suspended the service for sci-hub, albeit under a court order.

https://yro.slashdot.org/story/18/02/05/1944225/cloudflare-t...


The Galileo link works for me. It's worth pointing out that Google, at the very least, censors as readily as Cloudflare [1].

My understanding of Cloudflare's policies, though, is that with the exception of exceptionally objectionable content, Cloudflare only takes sites down in response to a court order. I don't know if it has been established that DNS is something which operators have a proactive obligation to censor, but I imagine it's the kind of thing Cloudflare would go to court over.

1- https://www.vox.com/policy-and-politics/2017/8/14/16143820/g...


"I wish that they talked a bit more about their stance regarding censorship. They have a small paragraph talking about the problem, but they don't talk about the "solution"."

I think there's a good way to put this to the test - establish a DNS "mixer" that will randomly direct DNS requests to either 1.1.1.1 or 8.8.8.8 or (whatever) and let the public have access to it.

In this way, Cloudflare would bear some small expense from processing these DNS requests (essentially zero) but would receive no information about the initial requestor.

It would be interesting to run this experiment and perhaps see some real traffic on the DNS mixer ... and then see how cloudflare responds.

Would they block the mixer ?
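
For what it's worth, a toy version of such a mixer is only a few lines: a UDP proxy that forwards each incoming query to a randomly chosen upstream, so no single provider sees the full query stream. Everything here (listen port, upstream list) is illustrative, not part of any real service:

```python
import random
import socket

# Illustrative upstream resolvers for the "mixer" idea.
UPSTREAMS = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]

def pick_upstream():
    """Choose an upstream at random so no single provider sees all queries."""
    return random.choice(UPSTREAMS)

def mix(listen_addr=("127.0.0.1", 5353)):
    """Forward each incoming UDP DNS query to a randomly chosen upstream."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen_addr)
    while True:
        query, client = sock.recvfrom(4096)
        upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        upstream.settimeout(2.0)
        try:
            upstream.sendto(query, (pick_upstream(), 53))
            reply, _ = upstream.recvfrom(4096)
            sock.sendto(reply, client)
        except socket.timeout:
            pass  # drop on timeout; a real mixer would retry elsewhere
        finally:
            upstream.close()
```

Of course each provider still sees *some* of your queries, so the privacy gain is statistical, not absolute.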


For the Cloudflare folks hanging around:

Please, please, please add some basic "features" (like Google does) that will help when troubleshooting resolution!

For example, the following will show the unicast IP address of the server you're hitting when using 8.8.8.8:

  $ dig @8.8.8.8 txt o-o.myaddr.l.google.com. +short
Additionally, with one other DNS query, we can get a list of what netblocks are being used (for Google Public DNS) in what datacenters/locations:

  $ dig @8.8.8.8 txt locations.publicdns.goog. +short
(This same info, along with a small shell script to format it nicely, is available on their web site [0] as well.)

[0]: https://developers.google.com/speed/public-dns/faq


Thank you for the suggestions. I'll make sure they get relayed to the team.


There's a public list of IP ranges on the website: https://www.cloudflare.com/ips/

There are troubleshooting utilities in the CHAOS class, e.g. dig @1.1.1.1 id.server ch txt


I think i have questions to Google:

  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
  "74.125.46.8"
  "edns0-client-subnet 92.223.114.166/32"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
  "74.125.46.11"
  "edns0-client-subnet 176.36.247.0/24"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
  "74.125.74.3"
  "edns0-client-subnet 94.181.44.185/32"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
  "74.125.46.8"
  "edns0-client-subnet 92.223.114.166/32"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
  "74.125.74.3"
  "edns0-client-subnet 94.181.44.185/32"


Are you in .ru?

You might direct your questions at your ISP instead as it appears that someone may be intercepting your DNS requests.

---- To elaborate a bit, the differences in the (74.125.x.x) IP addresses being returned is somewhat normal and would usually be attributed to simple load balancing (as d33 pointed out). That is, 8.8.8.8 is actually a load balancer with several servers (including 74.125.46.8, 74.125.46.11, and 74.125.74.3) behind it.

The differences seen in the returned "edns0-client-subnet", however, are, well, "interesting".

As you've directed the requests to 8.8.8.8 directly (as opposed to your system's default resolver, whatever that is), the response returned for "edns0-client-subnet" should normally either be your own IP address or a supernet that includes it. (In my case, for example, the value is the static IP address (/32) of my own resolver.) When sending multiple requests such as you have, the "edns0-client-subnet" shouldn't really be changing from one request/response to the next; at the least, the values shouldn't change this much.

The fact that the responses are changing would seem to indicate that Google DNS servers are receiving the requests from different IP addresses when they should, in fact, all be coming from the same IP address (yours). These changes would lead me to suspect that someone (i.e., your ISP) is intercepting your DNS requests and "transparently proxying" them on your behalf.

If your ISP is using CGNAT (and issues you a private IP address) or something similar, that might explain it. Otherwise, I would be suspicious.
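
For context, the edns0-client-subnet values being discussed are an EDNS option defined in RFC 7871: the resolver attaches (usually a truncated version of) the client's subnet to the upstream query so authoritative servers can give geo-targeted answers. A minimal sketch of the wire format, using only the standard library (the addresses are taken from the thread above for illustration):

```python
import ipaddress
import struct

def build_ecs_option(ip: str, prefix_len: int) -> bytes:
    """Encode an EDNS Client Subnet option (RFC 7871, option code 8)."""
    addr = ipaddress.ip_address(ip)
    family = 1 if addr.version == 4 else 2  # 1 = IPv4, 2 = IPv6
    # Only the significant octets of the (possibly truncated) address are sent.
    nbytes = (prefix_len + 7) // 8
    payload = struct.pack("!HBB", family, prefix_len, 0) + addr.packed[:nbytes]
    return struct.pack("!HH", 8, len(payload)) + payload  # OPTION-CODE, LENGTH

def parse_ecs_option(opt: bytes):
    """Decode the option back into (address-string, prefix length)."""
    code, _length = struct.unpack("!HH", opt[:4])
    assert code == 8
    family, src_len, _scope = struct.unpack("!HBB", opt[4:8])
    raw = opt[8:8 + (src_len + 7) // 8]
    raw += b"\x00" * ((4 if family == 1 else 16) - len(raw))
    return str(ipaddress.ip_address(raw)), src_len

opt = build_ecs_option("94.181.44.0", 24)
print(parse_ecs_option(opt))  # ('94.181.44.0', 24)
```

This is also why a well-behaved resolver reports a /24 rather than your exact /32: it forwards only enough of the address for geolocation, which is what the o-o.myaddr.test.l.google.com responses above show.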


I have a static public /32. My ISP intercepts DNS traffic for censorship purposes, but I strongly doubt that this traffic is forwarded anywhere.

  [user@v-fed-1 ~]$ dig txt o-o.myaddr.test.l.google.com @8.8.8.8 +short
  "173.194.98.4"
  "edns0-client-subnet 94.181.44.185/32"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.test.l.google.com @8.8.8.8 +short
  "173.194.98.4"
  "edns0-client-subnet 94.181.44.185/32"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.test.l.google.com @8.8.8.8 +short
  "173.194.98.4"
  "edns0-client-subnet 94.181.44.185/32"

  [user@v-fed-1 ~]$ dig txt edns-client-sub.net @8.8.8.8 +short
  "{'ecs_payload':{'family':'1','optcode':'0x08','cc':'RU','ip':'94.181.44.0','mask':'24','scope':'0'},'ecs':'True','ts':'1522656335.56','recursive':{'cc':'FI','srcip':'74.125.74.4','sport':'40964'}}"
  [user@v-fed-1 ~]$ dig txt edns-client-sub.net @8.8.8.8 +short
  "{'ecs_payload':{'family':'1','optcode':'0x08','cc':'RU','ip':'94.181.44.0','mask':'24','scope':'0'},'ecs':'True','ts':'1522656336.4','recursive':{'cc':'US','srcip':'74.125.46.4','sport':'51510'}}"
  [user@v-fed-1 ~]$ dig txt edns-client-sub.net @8.8.8.8 +short
  "{'ecs_payload':{'family':'1','optcode':'0x08','cc':'RU','ip':'94.181.44.0','mask':'24','scope':'0'},'ecs':'True','ts':'1522656337.96','recursive':{'cc':'US','srcip':'74.125.46.4','sport':'54992'}}"

127.1 (shorthand for 127.0.0.1) is a DNS-over-HTTPS proxy.

  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @127.1 +short
  "173.194.98.11"
  "edns0-client-subnet 94.181.44.0/24"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @127.1 +short
  "173.194.98.11"
  "edns0-client-subnet 94.181.44.0/24"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @127.1 +short
  "173.194.98.6"
  "edns0-client-subnet 193.151.48.130/32
Same story from another (business) connection.

  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
  "74.125.74.3"
  "edns0-client-subnet 37.113.134.30/32"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
  "74.125.46.4"
  "edns0-client-subnet 85.29.165.14/32"
  [user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
  "173.194.98.13"
  "edns0-client-subnet 77.234.25.49/32"


If you run those commands without the +short you will see that the TTL values for those responses are less than 59, which for Google Public DNS indicates they are cached and explains why the IP addresses shown are not yours.

The o-o.myaddr.l.google.com domain is a feature of Google's authoritative name servers (ns[1-4].google.com) and not of 8.8.8.8. You can send similar queries through 1.1.1.1, where you will see that there is no EDNS Client Subnet data provided. This improves the privacy of your DNS but potentially returns less accurate answers, as Google's authoritative servers do not see your IP subnet, only the IP address of the CloudFlare resolver forwarding your query.


Isn't o-o.myaddr.l.google.com intended for troubleshooting, and shouldn't it show the correct ECS? o-o.myaddr.test.l.google.com always shows the correct ECS.


What is your question? I think we're seeing load balancing here.


Load balancing of ECS?


This is the Cloudflare resolver, right? What's the "privacy-first" part about? It's just another third party DNS host. They haven't changed the protocol to be uninspectable and AFAIK haven't made any guarantees about logging or whatnot that would enhance privacy vs. using whatever you are now. This just means you're trusting Cloudflare instead of Comcast or Google or whoever.


"We will never log your IP address (the way other companies identify you). And we’re not just saying that. We’ve retained KPMG to audit our systems annually to ensure that we're doing what we say."

Now, audits are generally not worth very much (even, perhaps even especially, from a Big Four group like KPMG), but for this type of thing (verifying that a company isn't doing something they promised they would not do) they're about the best we have.


Worth noting they have already edited the article (less than 2hours later) and taken out the "We will never log your IP" bit...

"We committed to never writing the querying IP addresses to disk and wiping all logs within 24 hours."

"While we need some logging to prevent abuse and debug issues, we couldn't imagine any situation where we'd need that information longer than 24 hours. And we wanted to put our money where our mouth was, so we committed to retaining KPMG, the well-respected auditing firm, to audit our code and practices annually and publish a public report confirming we're doing what we said we would."


Not sure if they edited anything. Your quote is from the blog post[1] but the aforementioned quote by tialaramex is from the 1.1.1.1 site itself[2].

[1] https://blog.cloudflare.com/announcing-1111/ [2] https://1.1.1.1


> Worth noting they have already edited the article (less than 2hours later) and taken out the "We will never log your IP" bit...

> "We committed to never writing the querying IP addresses to disk ..."

A DNS resolver does need to record the querying IP for at least a few moments because, you know, they have to respond to your query.

However, I don't know why they changed that sentence; it could be for other reasons too.


Seems like they're just trying to be clear.

It's not uncommon to retain logs like that for debugging purposes, abuse prevention purposes, etc, but then to go back later and wipe them or anonymize them.
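
The usual anonymization step is to zero the host bits of the client address before anything is persisted, e.g. truncating IPv4 to a /24 and IPv6 to a /48. A sketch of that idea (the prefix lengths are a common convention, not anything Cloudflare has documented):

```python
import ipaddress

def anonymize(ip: str) -> str:
    """Zero the host bits before a log line is written, so stored
    logs retain coarse locality but cannot identify one client."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48  # illustrative choices
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymize("94.181.44.185"))   # 94.181.44.0
print(anonymize("2001:db8::1234"))  # 2001:db8::
```

Debug logs can then be kept briefly in full and replaced with the truncated form (or wiped) on a short schedule.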


>"Now, audits are generally not worth very much (even, perhaps even especially, from a Big Four group like KPMG)"

Indeed, see the recent KPMG scandal:

https://www.marketwatch.com/story/kpmg-indictment-suggests-m...


Seems we need an auditor auditor.


Quis custodiet ipsos custodes?


They were also implicated in tax evasion schemes in Canada.

http://www.cbc.ca/news/business/canada-revenue-kpmg-secret-a...


Where is the technical audit report published? Open access url please.


Having dealt with KPMG recently (which I do at least once a year...), I would not expect to see the report.

KPMG's risk department - the lawyers' lawyers - appears to be violently allergic to their customers disclosing any report to outside parties. Based on my experience you can get a copy, but first you and the primary customer need to submit some paperwork. And among the conditions you need to agree with is that you don't redistribute the report or its contents.

Disclosure: I deal with security audits and technical aspects of compliance.


> KPMG's risk department - the lawyers' lawyers - appears to be violently allergic to their customers disclosing any report to outside parties.

Isn't that the entire point of such an audit? To be able to present it to outside third-parties?

For examples, Mozilla (CA/B) requires audits for root CAs. The CA must provide a link to the audit on the auditor's public web site -- forwarding a copy or hosting it on their own isn't sufficient.


You'd think, but it's surprisingly difficult to get the real full audit report. Mozilla's root policy _does_ require that they be shown the report, and has a bunch of extra requirements in there to ensure they're more detail, rather than some summary or overview document the auditors were persuaded to produce for this purpose. But the CA/B rules would allow just an audit letter which basically almost always says "Yes, we did an audit, and everything is fine" unless the auditors weren't comfortable writing "everything is fine". And almost always they feel that a footnote on a sub-paragraph buried in a detailed report is enough to leave "everything is fine" as the headline in the letter...

If you've ever been audited for some other reason, you'll know they find lots of things, and then you fix them, and that's "fine". But well, is it fine? Or, should we acknowledge that they found lots of things and what those things were, even if you subsequently fixed them? The CA/B says you have several months to hand over your letter after the audit period. Guess what those months are spent doing...


Auditors will confirm the result of the audit but usually not disclose the content of the audit report.


Does KPMG employ technology people? I thought they did only financial audits.


First of all, KPMG is the name of a group. All the Big Four are arranged as group companies: a single financial entity owns the name (e.g. "KPMG", "EY") from some friendly place (London in all but one case) and licenses out the right to operate a member company to professional services companies in various jurisdictions around the world. The group has the famous name, and sets some rules about training and compliance, but the employees will (almost all) work for the local member companies, even though reporting for lay people will say the group name, as they do here.

Secondly, the idea in audit is not really about digging into the engineering. So although they will need people who have some idea what DNS is, they don't need experts - this isn't code review. The auditors tend to spend most of their time looking at paperwork and at policy - so e.g. we don't expect auditors to discover a Raspberry Pi configured for packet logging hidden in a patchbay, but we do expect them to find if "Delete logs every morning" is an ambition and it's not anybody's job to actually do that, nor is it anybody's job to check it got done.


I think it's somewhere in between, the article itself states:

"to audit our code and practices annually and publish a public report confirming we're doing what we said we would."

I run an investment fund (hedge fund) and we are completing our required annual audit (not by KPMG). It is quite thorough: they manually check balances in our bank accounts directly with the bank, they verify balances directly off blockchain (it's a crypto fund) and have us prove ownership of keys by signing messages, etc. And they do do a due diligence (lots of doodoo there) that we are not doing scammy things like the equivalent of having a Raspberry Pi attached to the network. Now this is extremely tough of course, and they are limited in what they can accomplish there, but the thought does cross their mind. All firms are different, but from what we've seen most auditors do decent jobs most of the time. Their reputation can only be hit so many times before their name is no longer valuable as an auditor.


How do we know they are not lying (or forced to lie, they are a US company after all)?


Cloudflare is making a public pronouncement that they're not going to sell your DNS data nor track your IP address, with the implication that they will also not use the usage data to upsell you services. That's about the only additional "privacy" edge they offer.

In the same breath, they insinuate that Google both sells and uses DNS usage from their 8.8.8.8 and 8.8.4.4 resolvers.


They are NOT saying Google is lying and collecting the data. They are saying the business model of Google inherently provides such incentive.

Cloudflare is somewhat right: Means, Motive and Opportunity - but for a conviction you have to prove someone acted on the Opportunity. The Motive of Google is tampered with severe risk for loosing trust.

Cloudflare can make an argument they are fundamentally better positioned and that is all they do. As with all US based operations the NSA may cook up some convincing counterarguments and we may never know.


>"They are NOT saying Google is lying and collecting the data."

The OP did not say that cloudflare is "saying" that. The OP very clearly said they are "insinuating" it. And yes under the heading "DNS's Privacy Problem" the post mentions:

"With all the concern over the data that companies like Facebook and Google are collecting on you,..."

I think that juxtaposition of this statement under a bolded heading of "DNS's Privacy Problem" is very much insinuating that.


Bear in mind, Google's changed its mind before and can again at any time. For instance, when they bought DoubleClick they promised not to connect it with the Google account data they had. Then they changed that policy later.


That does not change the fact that Cloudflare is insinuating something about Google's DNS.


Is the suggestion that a company whose main business is targeting ads based on collecting data about you might be collecting data about you an unfair insinuation?


Please follow the thread - the question of whether an insinuation is "fair" is not what's being discussed. What's being discussed is whether or not Cloudflare said or insinuated that there were privacy concerns with using 8.8.8.8.


It's clear what you meant, but for whatever it's worth, I think the word you wanted was "tempered", not "tampered".


For what it’s worth, you missed to point out “loosing” vs. “losing” in that comment (where it talks about “loosing trust”). :)


That looked more like a garden-variety typo than a bona fide eggcorn, so I gave it a pass ;)

https://en.wikipedia.org/wiki/Eggcorn


> they insinuate that Google both sells and uses DNS

I don't think it's intended to say anything about Google specifically. Keep in mind that there are many other DNS services out there, and some of them are known for being pretty scummy, e.g. replacing NXDOMAIN results with "smart search" / ad pages.


>"I don't think it's intended to say anything about Google specifically"

Google is mentioned 13 times in this post and their resolvers 3. That's 16 total mentions of Google in their post.


I was specifically referring to the statement that Cloudflare won't sell your DNS history.


Yes they have:

"Privacy First: Guaranteed. We will never sell your data or use it to target ads. Period. We will never log your IP address (the way other companies identify you). And we’re not just saying that. We’ve retained KPMG to audit our systems annually to ensure that we're doing what we say.

Frankly, we don’t want to know what you do on the Internet—it’s none of our business—and we’ve taken the technical steps to ensure we can’t."


> Frankly, we don’t want to know what you do on the Internet—it’s none of our business

In the DNS resolver space, what is their business?


They want fast resolution of names that point to websites hosted by Cloudflare. Cloudflare makes their money selling their network to businesses that use it, and anything that makes that service better for the end-user increases customer stickiness.


Traffic from heavily censored regimes to its big customers, which often end up being censored due to user contributions, I suppose.


Making the internet fast and reliable, and arguably DNS resolution plays into that.


Could be a precursor to launching an OpenDNS competitor.


Is OpenDNS even as relevant as it was earlier, before Google DNS appeared (and then OpenDNS was bought by Cisco)?


Maybe not _as_ relevant, but still a considerable number of clients are configured to trust OpenDNS, and their far more ambiguous stance on what exactly this is for is appealing to some people. For example, OpenDNS says yes, absolutely it is their business what you're looking up, and maybe you are a Concerned Parent™ who wants to ensure their children don't access RedTube, so that feels like a good idea.


I was thinking more along the lines of their SME offering. DNS filtering is an important layer in network security and CloudFlare’s position of being in the middle of a large portion of Internet traffic, alongside now trying to attract a chunk of general DNS queries, potentially gives them a great deal of insight into who the bad actors are.


Serious question: where is that quote from? The link above is just to the resolver address.


Quote is at: https://1.1.1.1


Not opening for me.



I think the whole point for such free services is to log that data and extract statistical meaning out of it - in this case, they pledge to use an anonymized format. On the other hand CloudFlare's mission (ensure secure, solid end to end connectivity) is much better aligned with the user's needs than Google's mission (sell more ads).


On the contrary, they've taken 2 big steps that are better than ISPs (not sure about Google):

* no logging

* DNS over HTTPS


Google is one of the first ones using DNS over HTTPS.

BTW if you want to use DNS over HTTPS on Linux/Mac I strongly recommend dnscrypt proxy V2 (golang rewrite) https://github.com/jedisct1/dnscrypt-proxy and put e.g. cloudflare in their config toml file to make use of it.
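
For reference, the GET form of DNS over HTTPS (RFC 8484) that such proxies speak is simple to construct by hand: build an ordinary wire-format DNS query, base64url-encode it with padding stripped, and put it in a dns= parameter. A minimal sketch; the Cloudflare endpoint is its public one, the rest is illustrative:

```python
import base64
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal wire-format DNS query (ID 0, RD set, one question)."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN

def doh_url(name: str, server: str = "https://cloudflare-dns.com/dns-query") -> str:
    """RFC 8484 GET form: the query travels in ?dns=<base64url, no padding>."""
    q = base64.urlsafe_b64encode(build_query(name)).rstrip(b"=")
    return f"{server}?dns={q.decode()}"

print(doh_url("example.com"))
```

Fetching that URL with Accept: application/dns-message returns a normal DNS response in the HTTPS body, which is exactly what a local proxy like dnscrypt-proxy unwraps back into plain port-53 answers for your stub resolver.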


The whole point of encrypting DNS traffic is to hide it from the likes of Google.


For me personally it is much more important to hide my DNS traffic from my ISP instead of Google, etc., even though I don't live in the US.

I pay them to access the internet, every further information they gather about my internet activity does not mean any benefit for me.


Hiding DNS traffic from your ISP is pointless when you have to give them the IP that gets resolved anyway for them to route your traffic.


Not really. Typically the query includes much more information (the site you want to visit) than the response (an IP potentially shared by thousands or millions of sites).


Even with https, the name of the site is sent in clear when the connection to the site is established (this is SNI).
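
The hostname really is in the clear: the server_name extension (RFC 6066) carries it as plain ASCII inside the unencrypted ClientHello. A sketch of the encoding, to show there is no obfuscation at all (the hostname here is just an example):

```python
import struct

def sni_extension(hostname: str) -> bytes:
    """Encode a TLS server_name extension (RFC 6066). It rides in the
    plaintext ClientHello, so anyone on-path can read the hostname."""
    name = hostname.encode("ascii")
    entry = struct.pack("!BH", 0, len(name)) + name       # type 0 = host_name
    server_name_list = struct.pack("!H", len(entry)) + entry
    # extension type 0 = server_name, then extension data length
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list

ext = sni_extension("example.com")
print(b"example.com" in ext)  # True: the name sits there verbatim
```

So encrypting DNS alone only moves the leak one packet later in the connection.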


Back when they chose this design for SNI, I’m sure someone argued that it was fine because DNS had already leaked the hostname anyway :)


It's really hard to fix this. https://datatracker.ietf.org/doc/draft-ietf-tls-sni-encrypti... is the state of the art -- note that's a Draft, and really, really not finished, help is doubtless welcome.

If it was easy, it would have been done during the TLS 1.3 process, but after a lot of discussion we're down to basically "Here is what people expect 'SNI encryption' would do for them, here's why all the obvious stuff can't achieve that, and here are some ugly, slow things that could work, now what?"


It is hard because of TLS's pre-PFS legacy and, to some extent, because of the (very meaningful) intention to reduce roundtrips. The way to do SNI-like stuff is obvious: negotiate an unauthenticated encrypted channel (by means of some EDH variant; you need one roundtrip for that) and perform any endpoint authentication steps inside that channel. This is what SSH2 does, and AFAIK Microsoft's implementation of encrypted ISO-on-TCP (e.g. rdesktop) does something similar.

Edit: in SSH2 the server authentication happens in the first cryptographic message from server (for the obvious efficiency reasons), and thus for doing SNI-style certificate selection there would have to be some plaintext server-ID in first clients message, but the security of the protocol does not require that as long as the in-tunnel authentication is mutual (it is for things like kerberos).


So, it feels like you're saying this is how SSH2 and rdesktop work, and then you caveat that by saying well, no, they actually don't offer this capability at all it turns out.

You are correct that you can do this if you spend one round trip first to set up the channel, and both the proposals for how we might encrypt SNI in that Draft do pay a round trip. Which is why I said they're slow and ugly. And as you noticed, SSH2 and rdesktop do not, in fact, spend an extra round trip to buy this capability they just go without.


A load balancer can chose the correct backend by using the SNI. So there is a use for being unencrypted.


You're still leaking that information due to SNI.


This does not make sense. Either people are not concerned about hiding their traffic, or, if they are, it follows they would be equally if not much more concerned about Google, which can track them across devices and build far more in-depth, invasive profiles than the ISP.

As an aside, it's strange that https everywhere has been pushed aggressively by many here under the bogeyman of ISP adware and spying while completely ignoring the much larger adware and privacy threats posed by the stalking of Google, Facebook and others. It is disingenuous and insincere.


Most fears of ISPs have been stoked primarily by tech companies, who invest a lot more money into marketing than the ISPs do.


I can only really discuss the UK, since that's the only place where I've bought home ISP service.

Only a handful of small specialist firms actually just move bits in the UK. Every single UK ISP big enough to advertise on television is signed up to filter traffic and block things for being "illegal" or maybe if Hollywood doesn't like them, or if they have "naughty" words mentioned, or just because somebody slipped. If you're thinking "Not mine" and it runs TV adverts then, oops, nope, you're wrong about that and have had your Internet censored without realising it. I wonder how ISPs got their bad reputation...


I've switched to cloudflare and none of the dns leak tests are showing my DNS, which I find interesting. They always showed google.


Did you read the page? They're supporting DNS over TLS and DNS over HTTPS - both changes to the protocol to make it uninspectable. They've also said they're not logging IP info and they're getting independent auditors in to confirm what they're saying. Sounds trustworthy to me.


Both encrypted extensions are of course inspectable at the end-point, which is the privacy model being discussed.

What is intriguing to me is why Cloudflare are offering this. Perhaps it is to provide data on traffic that is 'invisible' to them, as in it doesn't currently touch their networks. Possibly as a sales-lead generator.

Or is the plan to become dominant and then use DNS blackholing to shutdown malware that is a threat to their systems?


The goal is to make the sites that use Cloudflare ridiculously fast by putting the authoritative and recursive DNS on the same machine (for clients who use 1.1.1.1).


Im probably being naive, but maybe altruism? At least if you buy into their making the internet better rhetoric


Cloudflare is already a significant enough player in handling Internet traffic. Maybe the company does want to do good for the sake of doing good, but I’m wary of companies taking over in this manner and making the Internet more like a monolith than a distributed system.


It seems like bait-and-switch though? They talk about DNS over HTTPS and DNS without logging, and then direct you to installation instructions where you can learn to use "DNS without logging", but nothing that's encrypted? What am I missing?
