Prediction: DNS-over-TLS won't win. I don't think it's going to be able to get around the non-standard port issue.
Instead, I think DNS-over-HTTPS is gonna be the champ. The overhead of HTTP is a minor issue, but I think using a standard port more than makes up for it. I think the real inflection point will come once QUIC is more widely deployed. Combined with TLS 1.3's 0-RTT connection setup, we'll be able to get back to answering a DNS query in a single round trip (like today), but with assurances that the data wasn't monitored or tampered with between the client and the recursive resolver.
Games run on a subset of network connections: consumer home lines.
They don't run on corporate networks, over public wifi, or over mobile networks; all three frequently block non-standard ports (my city's free wifi blocks everything except 80 and 443, and DNS is hijacked).
These three network types are also important, maybe more important than simple home landlines, since they affect the people paying shitloads of cash or represent a very significant market share.
Besides public WLAN, this isn't an issue. In corporate networks it may be company policy to use their resolvers. External resolvers might not work for internal names, so their use is probably limited in the first place.
Mobile networks should allow all ports; if not, call your regulator.
So 1 out of your 3 is actually important. Also, firewall rules can be changed.
I'll give you corporate networks, though that's more guesswork than actual hard data. Plus, the point still stands that other protocols will be blocked unless they use ports 443 or 80.
Mobile networks in my experience block a variety of protocols and intercept DNS fairly regularly, even in the presence of DNSSEC or DNSCrypt. I'm not sure what calling the regulator would give me; they're not responsible for which ports the network blocks. Not every operator is in the US, and a majority of people do not live in the US yet may still want to use the internet without the operator playing around in DNS responses.
>I'll give you corporate networks, though that's more guesswork than actual hard data. Plus, the point still stands that other protocols will be blocked unless they use ports 443 or 80.
Yes. But this is a corporate network. It's not up to you to decide which protocols should be allowed or not (unless you are in a position to do so, of course).
I know it's quite easy to tunnel everything through something, but why do that in a corporate environment? If you need to access X, then get access to it (via proper channels).
>Mobile networks in my experience block a variety of protocols and intercept DNS fairly regularly, even in the presence of DNSSEC or DNSCrypt.
But do they block port 853, and if so, on what grounds? They sell you Internet access; if a port is blocked, it's no longer valid Internet access.
If the port is not blocked, however, then the ISP can no longer play around in DNS responses.
>But do they block port 853, and if so, on what grounds?
I don't know why they decide to block it, but they do. Most of the time it's to prevent spam or to protect customers (ports lower than 1024 are sometimes blocked for that reason).
>They sell you Internet access; if a port is blocked, it's no longer valid Internet access.
That's a laughable argument. They don't sell you unrestricted internet access; that's already evident from the data caps and the blocking of SMTP traffic on port 25.
Fact is, ports get blocked, and the networks mentioned do it a lot. We should accommodate these restrictions, because the alternative is that the software breaks for a consumer, and if it's a choice between "uninstall DNSCrypt/DoT/DoH" and "complain to the ISP or operator", most consumers won't complain to the ISP, because prior to DNSCrypt their internet worked, for all they care.
>That's a laughable argument. They don't sell you unrestricted internet access; that's already evident from the data caps and the blocking of SMTP traffic on port 25.
Some do, and I would complain about it. Maybe port 25 is blocked for a spam-related reason, but every other port is not.
But to come back to the point: crappy networks are not a reason to ditch "custom" ports. Fix the network.
>But to come back to the point: crappy networks are not a reason to ditch "custom" ports. Fix the network.
I heavily disagree. Broken and crappy networks exist, and for the user it's easier if the software we write works on broken and crappy networks. Most users will not complain to the network if YOUR software doesn't work on it while other software works fine.
You can disagree all you want. Nothing will change if you build for crappy networks. Build your application so that it can detect such a setup and tell the user.
It works for gaming. This alone is proof that it can work.
It works for gaming because most games that need non-crappy networks happen to run on connections that are less often crappy.
And despite that, I get regular issues when I need to play true P2P lobby games, because various people have a variety of difficult-to-work-around routers or ISPs. So at least in part it doesn't work for gaming either. I don't consider your proof valid.
If you consider ports 80/443 essential, then you can add 853 to that list. Every network that blocks it blocks an essential port, just as if it were blocking 443. This is the angle one needs to work here. Why keep the status quo for all eternity?
People have difficulties with port forwarding. It's fixable, but then again, that's a different issue: you are talking about incoming ports, not outgoing.
Adding ports to a list of "essential" ports is hard, considering some middleboxes haven't been updated since the IPv6 standard was published (1998), and those aren't even the oldest ones I've seen. They will drop and mangle packets as they please, and unless it costs them millions of dollars a month, these boxes will not be replaced or reconfigured, period. That is reality on the internet.
Corporate will not configure anything unless it costs them millions. Same for Mobile (which breaks frequently but thank you very much) and same for wireless networks.
Nobody will update machines that haven't been updated since the '90s just because your protocol needs a new port to be freely accessible. Thus nobody will adopt it, and in turn nobody will update their machines.
TCP/UDP/ICMP has ossified as the base trio of internet protocols, with ports 53/80/443 for traffic and TLSv1.2 for encrypted traffic. Almost everything outside these parameters breaks on a variety of networks, ranging from "bad performance" to "simply doesn't work", depending on whether you sit on a VPS or a normal landline like business or consumer DSL.
We have to deal with that if we want updated protocols to have even a remote chance of adoption. As I have repeatedly said, this is the reality of the modern internet: face it, or have your protocol forgotten and unused.
>Corporate will not configure anything unless it costs them millions.
Of course they will if it has a purpose. Not everything is driven by money, as you suggest. Besides, corporates have internal resolvers anyway.
>Same for Mobile (which breaks frequently but thank you very much)
No. If it does, complain to your ISP. There is no reason why an outgoing port should be blocked.
>and same for wireless networks.
They will update too.
>Nobody will update machines that haven't been updated since the '90s just because your protocol needs a new port to be freely accessible. Thus nobody will adopt it, and in turn nobody will update their machines.
Dangerous assumption. Opening a port does not need an update; it's TCP, the same protocol as HTTP on port 80 or 443.
First, I think it gives too much power to the browsers. Firefox was already making some dangerous choices with DNS over HTTPS in some of its recent changes. Chrome as well, making changes that benefit Google to the detriment of the rest of the web.
Second, I think it is an overall bad design choice to tunnel a lightweight protocol on top of HTTP on top of TLS, instead of just tunneling it directly over TLS.
I don't really see how encoding a DNS request as HTTP gives extra power to browsers. What power do they gain by writing "GET www.example.com" after a TLS handshake on port 443 versus writing "1234 0 0 0 0 1 0 0 1 0 0 0 ..." after a handshake on port 853?
Browsers can already do whatever they want to the URL you type in. What DNS packets look like does not add or remove any power.
Meanwhile, HTTPS isn't exactly heavy, and it's very well supported by everything. Every programming language has an HTTPS library. Writing a DNS-over-HTTPS program takes just a few lines of code.
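A minimal sketch, assuming Cloudflare's public DoH JSON endpoint (https://cloudflare-dns.com/dns-query) as one example; any provider with a JSON API would look much the same:

```python
# Resolve an A record over DoH via Cloudflare's JSON API.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "www.example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
)
for ans in resp.json().get("Answer", []):
    print(ans["name"], ans["data"])
```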
> First, I think it gives too much power to the browsers. Firefox was already making some dangerous choices with DNS over HTTPS in some of its recent changes. Chrome as well, making changes that benefit Google to the detriment of the rest of the web.
I really don't understand how DNS-over-HTTPS benefits Google to the detriment of the web any more than DNS-over-TLS would. I'm not really sure how either hurts the web.
> Second, I think it is an overall bad design choice to tunnel a lightweight protocol on top of HTTP on top of TLS, instead of just tunneling it directly over TLS.
Port 443 is generally unblocked. Port 853 is often blocked. How does tunneling via HTTP on port 443 hurt anyone? Yes, it's ridiculous, and a result of ridiculous middleboxes imposing silly policy. But if you can't change that (and you can't), then what is the harm? A few wasted bytes? So what? As long as it still fits in a single frame, it's still a single round trip on the network.
It had everything to do with it, as more and more is being forced through HTTPS because, I don't know, it's more secure, or it's the only port not being filtered. There were a few times I had to run sshd on port 443 (on a server I control, as a preventive measure) because a network I was forced to use (a "public" wifi) only allowed ports 53, 80 and 443 outbound.
Yeah, DNS over HTTP/2 (or the inevitable DNS over upcoming connection protocols) will definitely succeed.
I think the big paradigm shift is "let's decouple DNS interactions from a specific transport" - and once you open up to that concept, the option of having multiple transports for different use-cases as things move forward seems practical.
I deployed DNS-over-TLS on Cambridge University's central recursive DNS servers last week, and they immediately started receiving traffic from Android P users. Not very much traffic, a few queries per second, but not negligible. I did some follow-up investigation of how Android behaves in the wild and posted the results to the IETF DoH list (and the dprive list, but for some reason those copies did not go through); see https://mailarchive.ietf.org/arch/msg/doh/I-ytiO6ykbt9krrC9F... and the corrections and further information in the replies.
I still need to verify that TCP fast open is working, to minimize the DoT latency.
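For reference, on Linux the kernel side of TCP Fast Open is controlled by a sysctl; a generic check (not specific to the servers above):

```sh
# 1 = client TFO, 2 = server TFO, 3 = both; a DoT server needs bit 2 set.
sysctl net.ipv4.tcp_fastopen
# Example: enable client+server TFO (as root).
sysctl -w net.ipv4.tcp_fastopen=3
```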
Android 9 Pie is opportunistic: it tries to connect to port 853 and sends a probe query to make sure the server behaves plausibly well. Other clients need explicit configuration.
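A rough sketch of that kind of probe in Python, assuming Quad9 (9.9.9.9 / dns.quad9.net) as an example target; DoT (RFC 7858) prefixes each DNS message with a 2-byte length:

```python
# Probe a resolver for DNS-over-TLS support, roughly like Android P does.
import socket
import ssl
import struct

# Hand-rolled DNS query: header (ID, RD flag, 1 question), then
# QNAME example.com, QTYPE=A (1), QCLASS=IN (1).
query = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
query += b"\x07example\x03com\x00" + struct.pack("!HH", 1, 1)

ctx = ssl.create_default_context()
with socket.create_connection(("9.9.9.9", 853), timeout=3) as raw:
    with ctx.wrap_socket(raw, server_hostname="dns.quad9.net") as tls:
        tls.sendall(struct.pack("!H", len(query)) + query)
        (length,) = struct.unpack("!H", tls.recv(2))
        print("resolver answered with a", length, "byte DNS message over TLS")
```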
The article starts by stating that DNS doesn't provide a means to guarantee the integrity of the returned DNS data.
It then mentions DNSSEC as a protocol which exists to provide such a guarantee, and promptly dismisses it, along with DNSCurve and DNSCrypt, as protocols which have been so infrequently deployed as to be non-existent.
Further on, it states that DNS over TLS and DNS over HTTPS don't solve the integrity problem, but that this is OK because DNSSEC will provide that.
There are two ways to ensure the authenticity of data delivered over the Internet: you can authenticate the content, or you can authenticate the channel.
Overwhelmingly, practical security schemes on the Internet rely on channel security. We rely on TLS to ensure the integrity of the DOM on websites; we don't cryptographically sign the pages themselves.
All things being equal, you'd like to be doing both things. You'd like to have cryptographically signed web page DOMs, for instance (among other things, it would make web crypto a lot more useful).
But all things aren't equal: content authentication is difficult to manage in practice, and every security protocol we adopt has a cost.
Long story short: if you can protect the channels used by DNS lookups, you can get by without protecting the content. That's roughly the idea behind DoH and DoTLS.
The reality though is that all you really need is "DNS over TCP" (which, of course, we've had since basically the beginning). Practical forgery attacks against TCP DNS are difficult enough as to not be worth the trouble.
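For what it's worth, forcing TCP is a one-flag change with standard tooling; a quick illustration (the resolver address is just an example):

```sh
# Ask for example.com's A record over TCP instead of UDP.
dig +tcp A example.com @9.9.9.9
```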
Practical forgery attacks against an arbitrary client are hard, but configuring a public WiFi AP to intercept your favourite repeating-digit DNS server is trivial. Lots of people use public WiFi!
In such a scenario a VPN is a more secure answer than DNS-over-TLS, but this isn’t a realistic answer for the average user. It has to be something that is free and easy to enable.
You're correct: DNSSEC and DNS-over-TLS/DoH (DNS-over-HTTPS) provide different, and both necessary, aspects of securing records in DNS.
DNSSEC == authentication of records.
DNS-over-TLS/DoH == privacy, and authenticity of the server/client.
Both are independently useful and enforce different things for us. The biggest issue with DNSSEC is that, since it hasn't been widely adopted, what should you do with records that are either not signed or incorrectly signed? Most software doesn't really have a great way of raising DNS issues to the application in a way that would let users or something else grant a security exception.
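For what it's worth, about the closest most stacks get today is the AD (authenticated data) bit set by a validating resolver. A small sketch with dnspython (the resolver address is just an example); note it only tells you the upstream resolver validated, not your own application:

```python
# Check whether a validating resolver vouches for a name via the AD bit.
import dns.flags
import dns.message
import dns.query

q = dns.message.make_query("example.com", "A", want_dnssec=True)
r = dns.query.udp(q, "9.9.9.9", timeout=3)  # example validating resolver
print("AD (DNSSEC validated upstream):", bool(r.flags & dns.flags.AD))
```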
I recently configured my OPNsense router for DNS over TLS with Quad9, with certificate domain validation. It uses the included Unbound resolver. Not sure what I achieved, but it does feel good :)
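For anyone curious what that looks like under the hood, a sketch of the Unbound side; the OPNsense UI generates something along these lines, and the cert-bundle path is illustrative:

```
server:
    # CA bundle used to validate the upstream's TLS certificate.
    tls-cert-bundle: /etc/ssl/cert.pem

forward-zone:
    name: "."
    forward-tls-upstream: yes
    # IP@port#auth-name: the certificate must match dns.quad9.net.
    forward-addr: 9.9.9.9@853#dns.quad9.net
    forward-addr: 149.112.112.112@853#dns.quad9.net
```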
Switching to Unbound seems like extra work. I kept dnsmasq on my EdgeRouter and just pointed it at doh-client from [0], which is trivial to cross-compile. I'm using Google's DNS servers as upstream.
It is extra work either way. Which has better performance, though?
I'm using dnsmasq with Pi-hole's blocklists, and forwarding to Unbound for DNS over TLS (the dnsmasq side is sketched below). Forwarding to another client such as doh-client could also work, though I'm not sure how that would work with Quad9.
My router acts as a backup for this, to ensure there's less load on the MIPS machine.
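The dnsmasq side of such a setup is tiny; a sketch, assuming Unbound listens locally on port 5335 (an arbitrary choice):

```
# dnsmasq.conf: hand all queries to a local Unbound doing the DoT upstream.
server=127.0.0.1#5335
no-resolv
```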
Go is cross-platform, sure. However, dnscrypt-proxy [1] is also very portable.
It's based on Vyatta/VyOS [1]. There's a way to get OpenBSD running on it as well, but I don't have a link handy.
The router isn't open hardware, but it's good bang for the buck (I also run WireGuard on it, btw). If you want a fully open source router, I can recommend having a look at Router7 [2]. The author is using a PC Engines APU2.
The downside is you've got to do a lot of work yourself, just like with OPNsense. But I like OPNsense; even though the hardware from the company behind it is expensive, the same is true for pfSense. And the company behind that isn't so friendly...
What work? The install is super easy... I use OPNsense on a small, fanless, cheap 'mini PC' with 2 LAN ports that you buy from AliExpress. Full x86-64, Intel with AES-NI support, for like $200 with 4GB RAM and a 40GB SSD.
4 GB RAM and a 40 GB SSD on a router??? I don't need that.
What work? The work to maintain it, test it, etc. Essentially, every time a software update is rolled out, you do not know for sure whether it is going to work flawlessly on your platform. For a random home network that might be sufficient; for a corporate network, not so much.
I know about AliExpress (and the like), but I don't find comparing Chinaware with non-Chinaware fair without taking that into account as a minus. Not that I wouldn't go that route if I were to go DIY, though.
Router7 uses coreboot and a heartbeat to restart the machine if it fails.
x86-64 still uses more kWh than this MIPS machine. The ER-L has 3 ports, allowing physically separated networks; depending on your setup you can even use both. The ER-X is less powerful and is MIPS32, though it does support more hardware offloading (and WireGuard has optimizations written in C for MIPS32).
Routers must run open source software, no exceptions; they are keys to the kingdom, corporate or home, no difference. FreeBSD/OpenBSD are the de facto standard. Good projects like OPNsense test their production releases extensively.
Hardware is your choice, but x86 gives you the best compatibility, and the kWh are fine thanks to x86 CPU power management; mine uses less than 1 W, max TDP is 6 W.
Cisco, Juniper, and other closed source vendors have a history of backdoors [0]. Consumer-grade routers are a joke.
You were dependent on Cisco and Juniper routers whilst you posted this very message.
I've used the mess called Quagga back in the '00s. No, thank you. I did like OpenBGPd, but it isn't a necessity to have BGP support on every router. Linux can suffice on a router. And even though I prefer PF, nftables seems promising.
I don't want to use x86-32 for a myriad of reasons. I don't need the software compatibility x86-32 offers.
Yeah, and? There's HTTPS between my browser and news.ycombinator.com as well. So what does that have to do with my ER-L?
There's no need to link to Wikipedia's HTTPS article either. We both know what that is.
FYI: the malware you linked was for older or badly configured versions of those routers. If you don't upgrade OPNsense, or Linux/BSD in general, you're also in trouble.
There is HTTPS between HN and me. "HTTPS creates a secure channel over an insecure network. This ensures reasonable protection from eavesdroppers and man-in-the-middle attacks ..."
I used Stubby and Quad9 for a few months last year but I found the latency pretty terrible unfortunately. I would be curious to hear what other people are using and what their experience has been.
I used SSH SOCKS tunnels with Stubby to keep myself online inside China's state firewall on two recent trips. Commercial VPNs are routinely slowed down or blocked; if you have the luxury of an SSH-enabled host "outside" that you can use, Stubby plus a tunnel is a good way to get around DNS rewriting tricks and port/IP filters.
Yes, you get slower paths, trombone paths. But in the circumstances I was in, Stubby was a godsend.
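The tunnel side is a one-liner with stock OpenSSH (hostname and port are examples); you then point your DNS client or browser at the resulting SOCKS proxy:

```sh
# Open a dynamic SOCKS5 proxy on localhost:1080 through the outside host.
# -N: run no remote command; -f: go to background after authentication.
ssh -D 1080 -N -f user@outside.example.net
```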
Also check out the DNS security option in Android Pie.
A node in Brisbane. I checked the path; the AS path was pretty tight: China Telecom to Telstra/Reach and then into the IX where I have a FreeBSD host. I was testing web speeds to the company on non-SSH/SOCKS paths, and they were oddly bad, quite heavily asymmetric, going via the US and Japan and in some cases Europe. China to Australia via Europe is not very optimal.
It has to be said that if you're trying to bypass DPI, speed isn't your main concern. I tolerated pretty low packet rates. If I had to do VoIP it would have been awful.
Really? They both have anycast PoPs all over the world, and in my tests the performance was very close: a couple of ms of difference, if that, too little to be felt by anyone.
You do realise your anecdotal "brief testing" (with n=1) in no way tells us anything, not least because you keep your location and network concealed? What might be better in your (to us unknown) test scenario might be different in other scenarios.
Again, if that's the case, you should open a trouble ticket, so the problem can be fixed. It won't get fixed if there isn't a work ticket and nobody knows where your traffic is coming from or going to.
Yes. IIRC from my testing a while back, both 1.1.1.1 and 9.9.9.9 close TLS connections either immediately or after a short timeout. A short timeout could work if you're running a larger network, but not so much at home.
What's the point of confidentiality for DNS? Can't an attacker pretty easily get IP-to-DNS mappings to discover who you're talking to? I guess not in the case of VPNs/Tor?
DNS hijacking is real and very annoying. The main benefit of these protocols is the integrity of the DNS resolution data. It allows you to use any DNS server you want, without your ISP modifying the responses.
Not in the case of Tor, but also not in the case of most cloud-hosted services.
For example, consider that Cloudflare proxies about 10% of the Internet. Well, if you request a site they proxy, and DNS is in the clear, it's obvious who you are connecting to.
But if you request a site and the DNS is encrypted, you could be visiting any one of 10% of the sites out there.
Similarly, if hosting on AWS or Google Cloud platform, there's a LOT of other services hosted in those IP blocks, and IPs change frequently, so there's a significant degree of ambiguity.
This is all in addition to fixing the threat of DNS leakage for VPN/Tor connections.
Cloudflare's customers can choose whether the backhaul between Cloudflare and their own web servers is HTTP, HTTPS with a Cloudflare-issued private certificate, or HTTPS using publicly trusted certs from the Web PKI.
If you choose either of the latter two options, bad guys can't MITM you. The middle option has the benefit that they can't even MITM you by subverting a public CA (since only Cloudflare's own certs are trusted); the last option has the benefit that you can "just" switch off Cloudflare and your site then works as an ordinary HTTPS site with no changes, if you ever want to do that.
And other large parts of the internet are "MITMed" by AWS, Heroku, Microsoft Azure, or other hosting companies, then. For some reason people don't make the same argument in every thread about AWS, though.
It's just my gut feeling that CF might abuse its unique position, having access to a large part of internet traffic in plaintext. What other CDN gives away free services? Hint: why is Google Analytics free?
They are a CDN: they store data and deliver traffic under contract with the site owner. That's not strictly a hosting company, but that's pretty much irrelevant: they see traffic because the site owner has chosen and contracted them to provide a service that requires them to see traffic. The same way that traffic flows through AWS load balancers because the site owner configured that, or through the servers of a traditional hosting company because the site owner chose to host there.
One certainly can argue against Cloudflare specifically, or against centralization in general, but IMHO "they see traffic" isn't a very useful argument on its own.
https://github.com/jedisct1/dnscrypt-proxy
It doesn't get as much love as it should, but it is probably the best way to encrypt your DNS requests right now. The protocol was initially developed by OpenDNS, but many resolvers support it now (Cisco, CleanBrowsing, etc.). The list of supporting services is impressive:
https://download.dnscrypt.info/dnscrypt-resolvers/v2/public-...
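For anyone wanting to try it, dnscrypt-proxy v2 is configured with a short TOML file; a minimal sketch (the server names are picked from the public resolver list above):

```toml
# dnscrypt-proxy.toml, minimal sketch
listen_addresses = ['127.0.0.1:53']        # act as the local stub resolver
server_names = ['cleanbrowsing', 'cisco']  # entries from the public list
require_dnssec = true                      # only use DNSSEC-capable servers
```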
On the other hand, DNS over [HTTPS|TLS] is pretty new and doesn't have as much support, except from a few players. A good list is here as well:
https://www.reddit.com/r/sysadmin/comments/976aj2/updated_li...