Wow, this is terribly misleading DNSSEC propaganda. It tells me:
"Protected from redirection to false IP addresses (DNSSEC)"
What does that mean? It means that whatever other DNS server I use seems to verify DNSSEC signatures (I use Google's DNS fwiw). Yet this doesn't provide any reasonable sense of protection, as the connection to that DNS server may very well be compromised.
It would just as happily show "DNSSEC protection" on an open public wifi, if the provider decided to enable DNSSEC on its resolver.
The question for many residential internet users is: Just because I set my DNS to Google's, do my requests really arrive there? Or does my ISP use transparent DNS proxies?
I know that for many ISPs around here (Telekom especially), setting your DNS doesn't have any effect unless you run a local resolver (or DNScrypt).
It's a different threat model. Classical DNS (i.e. without port randomization and a whole host of other tricks) is very easy to spoof from all over the internet.
Inserting yourself between a client and a server is way more difficult.
Note that from the standpoint of traffic analysis, you still don't want your TLS traffic to go through a third party.
So if your threat model mostly includes nation-state attackers then DNSSEC is only useful for DANE. If you also want to secure a lookup of, for example, pool.ntp.org then DNSSEC for A and AAAA records also makes sense.
> If you also want to secure a lookup of, for example, pool.ntp.org then DNSSEC for A and AAAA records also makes sense.
The fun part begins when you realize you can't validate DNSSEC because your clock has drifted too much. So how do you get your initial sync from pool.ntp.org with DNSSEC validation enabled?
If the DNSSEC-validating resolver is a server then it is usually not an issue. Most server hardware has a battery-backed real-time clock. In the odd case that you are bootstrapping a server, you would have to set the time manually or make setting the time part of the bootstrap process.
For embedded systems that don't have a battery-backed real-time clock and want to do local DNSSEC validation, this is indeed an issue.
There are plenty of hacks to make it work, but no real standard.
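For illustration only, here is one of those hacks sketched in Python with the dnspython package (the upstream resolver address is a placeholder): fetch the NTP pool addresses with the CD (checking disabled) bit set, sync the clock against them, then repeat the query with validation turned back on.

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    UPSTREAM = "192.0.2.53"  # placeholder address for the upstream resolver

    def bootstrap_ntp_addresses():
        # Ask with the CD (checking disabled) bit so the answer comes back
        # even when our clock is too far off to validate RRSIG windows.
        query = dns.message.make_query("pool.ntp.org.", dns.rdatatype.A,
                                       want_dnssec=True)
        query.flags |= dns.flags.CD
        response = dns.query.udp(query, UPSTREAM, timeout=5)
        return [rdata.address
                for rrset in response.answer
                if rrset.rdtype == dns.rdatatype.A
                for rdata in rrset]

    # Sync the clock against these servers, then re-resolve with the CD bit
    # cleared so the answer has to validate.
    print(bootstrap_ntp_addresses())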
Yes. I like what they are trying to do, but they don't seem to actually think things through. It took me forever to get some of the server tests fixed, where it would report that my server didn't properly support IPv6 (in DNS) when in fact it did; their test was simply wrong.
For me it now says 'your DNS service providers are:' followed by the name of the netblock owner. The actual name server is in my network.
I thought DNSSEC was supposed to be verifiable by the client? If it isn't then it's pointless in the way that you suggest, but I find it hard to believe that hole was left.
Congratulations, you've understood the main hole in DNSSEC.
The thing is: you can verify DNSSEC on the client. In theory. It's just that 99.9% (rough estimate; may be higher) of people don't.
You'd have to run your own resolver. Which might work, if your ISP isn't doing funny things with your DNS traffic. Which some ISPs do. Which means it can't be deployed widely.
This thing was built in the 90s, when people assumed you had a DNS server managed by some admin you trust, on some trusted network. Moving it to today's internet is pretty much impossible.
What do you mean, "run your own resolver"? That's a fancy name for the library that the application uses to speak DNS, not a separate thing that has to be set up, run, etc, separately.
Instead of asking your configured DNS server "what IP does google.com have?", you traverse the whole chain yourself. First, you consult your list of root servers: which servers can answer about .com? OK, ask them which name servers google.com should have. Next, send the request directly to those servers. Now you get a response that you can use.
This chain can get really long depending on the service's DNS configuration. And the whole time, every response has to come back DNSSEC-signed.
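Very roughly, and assuming the dnspython package plus a single hard-coded root server (a real resolver ships the full root hints file and, for DNSSEC, also checks RRSIGs and DS records at every step), the walk looks something like this:

    import dns.message
    import dns.query
    import dns.rdatatype

    ROOT_SERVER = "198.41.0.4"  # a.root-servers.net

    def ask(server_ip, name):
        # Query one server directly, asking for DNSSEC records too (DO bit).
        query = dns.message.make_query(name, dns.rdatatype.A, want_dnssec=True)
        return dns.query.udp(query, server_ip, timeout=5)

    def first_glue_address(referral):
        # Pull an IPv4 glue record out of the referral's additional section.
        for rrset in referral.additional:
            if rrset.rdtype == dns.rdatatype.A:
                return rrset[0].address
        raise RuntimeError("referral carried no A glue records")

    # 1. The root servers refer us to the .com servers.
    com_server = first_glue_address(ask(ROOT_SERVER, "google.com."))
    # 2. The .com servers refer us to google.com's own name servers.
    auth_server = first_glue_address(ask(com_server, "google.com."))
    # 3. google.com's servers finally answer authoritatively.
    print(ask(auth_server, "google.com.").answer)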
If I run my own resolver, with a hardcoded [1] trust anchor, how could an ISP affect me regardless of what funny things it does with my DNS traffic...?
Well, the traffic is not encrypted or otherwise protected, so a firewall trying to be "smart" could do all kinds of things. E.g. not letting you connect to other DNS servers at all or filtering all queries with unusual record types.
> Are we on the same page that with DNSSEC activated on a local resolver one would either get an authentic answer, or nothing at all?
Sure. But it's not very relevant, because almost nobody does that. And that's unlikely to change, because getting nothing at all isn't a very desirable state of affairs.
And given that forcing local DNSSEC resolvers in an OS or a browser would likely mean that a large share of your userbase gets nothing at all, this is pretty much impractical.
> And that's unlikely to change, because getting nothing at all isn't a very desirable state of affairs.
It worked for HTTPS - more and more browser builds refuse to show you stuff, with no workaround, even if there is nothing wrong with the certificates ( cough-sha1-cough-or-cough-chrome-cert-transparency-cough ). Yet I don't see any users revolt.
Claiming that having an all-or-nothing HTTPS is a-ok, yet having all-or-nothing DNS is unacceptable is... inconsistent.
Correct. And if the local resolver is sufficiently close to your client that there is very low risk of an attacker getting into your local network, then you can have a higher degree of trust in those validated answers from your resolver.
Only if your ISP doesn't molest DNS packets. More importantly: this only works for a small set of nerds; it doesn't scale to every user on the Internet --- this is the worst kind of "insecurity for thee not me". For refusing to make compromises like this, and instead insisting that sound cryptography be made available to all users, Moxie and Trevor just won the Levchin Prize at RWC.
Notice also that Signal provides a massive amount of cryptographic security to billions of people without needing a PKI controlled at its roots by world governments.
It can be verified by the client [1] (the wikipedia link also has the RFC source), but typically the verification is done by the resolver, which introduces the problem that the client has to trust the resolver and the network from the resolver to the client (the "last mile" problem).
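For anyone curious what client-side verification actually involves, here is a toy sketch with dnspython that checks a single link of the chain: a zone's DNSKEY RRset against its own RRSIG. A real validator would also chase the DS records all the way up to the root trust anchor; the resolver address and zone name below are just examples.

    import dns.dnssec
    import dns.message
    import dns.name
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    RESOLVER = "8.8.8.8"                       # any resolver that returns RRSIGs
    zone = dns.name.from_text("example.com.")  # a signed zone, for illustration

    request = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
    response = dns.query.udp(request, RESOLVER, timeout=5)

    dnskey = response.find_rrset(response.answer, zone,
                                 dns.rdataclass.IN, dns.rdatatype.DNSKEY)
    rrsig = response.find_rrset(response.answer, zone,
                                dns.rdataclass.IN, dns.rdatatype.RRSIG,
                                dns.rdatatype.DNSKEY)

    # Raises dns.dnssec.ValidationFailure if the signature doesn't check out.
    dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
    print("DNSKEY RRset for", zone, "verified")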
You are right, but the wording was chosen with the average internet user in mind. Luckily the 'DNSSEC' word in brackets at the end lets you, the more tech-savvy user, know what was really meant.
OK, so for those of us naive about DNS security, can someone summarize the current best practice for DNS on gateway routers and roaming endpoints (laptops)?
The short answer is: pretty much everyone uses normal DNS, because the many show-stopping problems with DNSSEC include the insane design decision not to protect the "last mile" between the stub resolver on your own machine and the "DNS server" (technically: recursive cache) that DHCP configures.
If you're using Google's DNS, it will (pretty much pointlessly) validate DNSSEC records for you --- but the link between your computer and Google's DNS servers is completely unprotected (any attacker could simply trick your browser into believing there was no such thing as DNSSEC).
This doesn't much matter because only a tiny, tiny fraction of all DNS records are DNSSEC-signed. The modal experience for companies that do take the trouble to sign their DNS records is "taken offline completely by DNSSEC configuration mistakes". There is virtually no upside to participating.
The good news about all of this is that there's really nothing you need to do to have good DNS OPSEC. Just do what everyone else does, including pretty much all security people: delegate security to a higher layer of the Internet stack.
Google of course wants people to continue to use their DNS resolvers. So it is in their interest to focus only on techniques to improve access to their resolvers.
One thing that happened in recent years is that a very nice library called 'getdns' has been developed. Getdns does local DNSSEC validation but also contains various ways of accessing DNS servers and resolvers ("Roadblock Avoidance").
I use getdns in ssh for SSHFP, to obtain SSH key fingerprints from DNS. If DNSSEC doesn't work then ssh fails (or complains about an insecure connection). So far my experience is that it works.
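For anyone who hasn't seen SSHFP in action: the lookup itself is just another record type. A small sketch with dnspython rather than getdns (so it relies on the resolver's AD bit instead of validating locally, which is exactly the shortcut getdns avoids); the host name and resolver address are placeholders.

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    RESOLVER = "192.0.2.53"     # placeholder: a nearby validating resolver
    HOST = "host.example.com."  # hypothetical SSH server name

    query = dns.message.make_query(HOST, dns.rdatatype.SSHFP, want_dnssec=True)
    response = dns.query.udp(query, RESOLVER, timeout=5)

    # The AD flag only says the *resolver* validated the answer; getdns
    # validates locally instead, which is the whole point of using it.
    authenticated = bool(response.flags & dns.flags.AD)
    for rrset in response.answer:
        if rrset.rdtype == dns.rdatatype.SSHFP:
            for rdata in rrset:
                print(authenticated, rdata.algorithm, rdata.fp_type,
                      rdata.fingerprint.hex())

Stock OpenSSH can do something similar with VerifyHostKeyDNS, which likewise trusts the resolver's AD bit rather than validating locally.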
The problem with DNSSEC local validation is that it doesn't protect your privacy.
So there are two techniques under development to address that. One is to run DNS directly over TLS. The second is to run DNS over HTTPS.
Running DNS over TLS has the advantage that the semantics are clear (just DNS over TCP, but encrypted) but the downside that the port may be blocked.
DNS over HTTPS is unlikely to get blocked, but there are too many ways to transmit DNS over HTTPS, so it may take some time for that to get sorted out.
Of course, moving DNS from a lightweight UDP exchange to TLS or HTTPS requires quite a bit more resources on the server side.
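To make the "too many ways" point concrete: the JSON front end that Google and Cloudflare already expose is the easiest to poke at from a script, while RFC 8484 instead posts binary DNS messages with the application/dns-message content type. A quick sketch of the JSON form, using the third-party requests package:

    import requests

    response = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    for answer in response.json().get("Answer", []):
        print(answer["name"], answer["type"], answer["data"])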
So, local DNSSEC validation works. It is just a matter of turning it on. Server side, if the admins are themselves behind a DNSSEC-validating resolver then they quickly figure out how to avoid breaking it.
When it comes to privacy: if you send all your DNS queries to Google, who else are you really worried about watching your DNS traffic?
In my case it falsely reports that everything is OK because I have uMatrix enabled, which prohibits outside requests. When I disable it, it shows "not protected".
I'm in Silicon Valley, and outside of SF (where the ISP market isn't a monopoly), a gigabit from Comcast costs ~$300/mo. It still seems absurd to me that I'm 20 minutes from companies like Apple and Google, but getting decent ≥100Mbps Internet is expensive and challenging.
(I pay for "business class", however, so my bill is slightly more expensive per Mbps because of that (the $300/mo above is residential, though). But I get a nearly static IPv4 address, and customer support that's only moderately bad, as opposed to the residential-level support, which is beyond bad.)
Romania (and, if my memory serves me right, a lot of other Eastern European countries) has invested heavily in its internet infrastructure. Huge costs for the government, but it is paying off. As a result, almost the entire country has high-speed links for dirt cheap.
It's somewhat similar throughout Europe too. Speeds may vary. Prices are relatively low. Paying 29€/month for whatever your line is able to supply is common in France. Regrettably, due to our choice of investing heavily in copper lines, our infrastructure is starting to get old. For example, I am getting 8Mbps/1Mbps and it's not likely to change soon.
The government didn't get involved at all in the infrastructure here. There were thousands of small ISPs about 10-15 years ago that got bought by the two big ones. Then the two big ISPs switched everything to fiber and increased speeds while decreasing prices.
My ISP was so small that it required me to lay my own cables and get a router.
Mine is, and I live in the UK also. I'm not sure what joke you're making. Sure, we may not have the best bandwidth (although at my previous house I had 250Mbit), but supporting IPv6 (etc.) has nothing to do with being in the UK. Find a decent ISP; I recommend Zen (or, if you can afford them, AA).
Depends where you live. In Storrington the internet is rubbish and there's no phone signal. This is 2017 UK. Driving to London from that region you completely lose mobile signal at least 5 times.
People act so entitled when they live in cities; I happen not to like cities which makes me a minority, but there are plenty of wealthy business people crying every day about their connection south of London (and probably in more places; most places around Exeter aren't great either).
My parents live in a Dorset village and have the same issues. I'm on Three and even in the nearest towns (Dorchester, pop. 20k and Weymouth, pop. 50k) there is usually no or a very weak signal. They finally rolled out BT Infinity last year, so at least that's something.
I live in Lithuania now and it really shocks me how bad the UK is for these things. Here I have 600/600 FTTH for €20/month and LTE is basically universal, even in remote parts of the country.
It's cheap in Lithuania and other such countries because there was no significant prior investment in telecoms infrastructure, and the costs of deployment are generally lower too (cheaper labour, easier planning permission) - so when it comes to deploying Internet access to a previously disconnected community it only makes sense to roll out the bleeding-edge technology (e.g. FTTH).
Whereas in the UK, BT was/is obsessed with squeezing every last drop of bandwidth from POTS connections - because the cost of upgrading everyone's last-mile connections from copper (or even aluminium in some cases) to fibre is very cost-prohibitive: look at the sheer cost the cablecos shouldered during the mass roll-out of coax in the early 1990s (and even then, it was only to boxes in the street, not houses) - I understand their near-bankruptcy from this move led to them all coming together under NTL and Telewest, and then Virgin Media.
(The only thing that is inexplicable is how even modern, brand-new housing developments still have unshielded copper last-mile connections instead of FTTH: they don't even lay conduits to make it easier for possible future FTTH... idiocy)
Give the UK a few more years and there should be a mandate from above requiring FTTH and we'll see progress: maybe even 10Gig FTTH as standard, then the tables will turn and people in Lithuania will be stuck with their 1Gbps service until their next round of major infrastructure investment, potentially decades away.
(I'm aware that Fibre is generally more future-proof than copper, and a high-quality fibre line that handles 1Gbps today can easily handle 10Gbps, and potentially 40Gbps or even 100Gbps in the future - so my entire argument may be moot)
> BT was/is obsessed with squeezing every last drop of bandwidth from POTS connections
That wasn't what they wanted to do at all.
BT were preparing to do FTTH when I joined them in 1994 (I left in 2001). This was, as you say, going to be eye-wateringly expensive, because BT have a universal provision requirement - they couldn't upgrade the network in the cities and not do it in the countryside. The idea was to pay for this by providing television services, but OfTel (now OfCom) said this would be unfair competition with the cable providers - who were cherry-picking cities to make rollout cheaper. They would also have been in competition with Sky, which meant the Murdoch press lobbying against BT (among others; the media market is always a tangle of interests).
Additionally, local-loop unbundling (i.e. ADSL) was being proposed; BT were required to allow access to the last-mile network from in-exchange equipment, and to do this at line-rental prices that undercut themselves, in order to break their monopoly. OfTel were very likely to make the same requirement for FTTH/FTTC.
Of course, you pays your money you makes your choice - if BT had been allowed to go ahead with their TV services back then, we might've had FTTH way sooner, but BT probably still would have had a monopoly.
Source: I met the engineers doing FTTH on my first visit to Ipswich, I was part of the team working on the local-loop unbundling ordering systems (where other providers booked engineering time at exchanges) and gave presentations to them at OfTel's offices.
> Whereas in the UK, BT was/is obsessed with squeezing every last drop of bandwidth from POTS connections - because the cost of upgrading everyone's last-mile connections from copper (or even aluminium in some cases) to fibre is very cost-prohibitive
This is especially frustrating if you have a line that's directly connected to an exchange - you don't even benefit from the FTTC upgrades. Download-wise I can't complain too much - ~20Mbps is fine most of the time (though with family members who tend to leave streaming video running constantly and various game consoles that auto-update almost constantly, it's not ideal) - but the sub-1Mbps upload speed is terrible. If I've anything large to upload, it's usually faster to take it to my grandparents' house - connected to the same exchange, but getting an order of magnitude greater upload speed because they are connected via a cabinet.
> (The only thing that is inexplicable is how even modern, brand-new housing developments still have unshielded copper last-mile connections instead of FTTH: they don't even lay conduits to make it easier for possible future FTTH... idiocy)
Reminds me of a story my granddad told me from the 60s/70s (not sure exactly when it was). They'd just finished constructing a new road, laid all the conduits under the road for the various utilities, and left them plainly labelled (IIRC it was also pre-planned with the companies, but I'm not certain)... then came back two weeks later to find multiple utility companies had dug up parts of the road to lay their own and done a rough job of patching it back up. He was (understandably) less than impressed!
One of the things that really helped is that all passive telecommunications infrastructure is by law "common use" - so things like ducts, pipework, manholes, poles, etc. can be used by any company. This has really helped to level the playing field, so a single company doesn't have an unfair monopoly just because it was there first (cough BT cough). Where I'm living right now cable and DSL were available (maybe up to 50Mbit?), but last year fibre was rolled out by a different company. There are also guidelines on how the infrastructure should be delivered within buildings, so most apartment buildings have duct work going from the basement to the top floor, and space for the providers' equipment for future upgrades.
Where I live in the States, telephone pole access is "common use" but the bureaucracy around actually being able to do so makes it pretty much impossible to add new lines (e.g. needing to get an expensive environmental review for adding a wire to a pole that already has wires). Last I checked it took about a dozen permits, which took ~12-24 months to get. After you got the permits, you then needed to pay for the inspection and full replacement of any poles found to be old/substandard that you wanted to attach to.
I can get an unreliable 3Mbps with the wind in the right direction on Openreach, or anything up to 200Mbps on Virgin Media. I'd rather not use VM's heavily filtered IPv6-free zone, but it's not a question of not being able to afford a decent ISP, it's just practicality.
I use Virgin Media and have no problems on any of my devices. Downlink can be a bit slower at busy times, but I guess that's the nature of cable internet, and uplink is always at the limit. And I really like that they very rarely change IP addresses (I've had mine for at least 3 months now).
I have the feeling that Sky has slightly better peering (more stable speed to US & Asia during peak hours), but the higher speed on VM is more important here, and ping times are generally very low.
What do you mean by IPv6-free zone? I have IPv6 disabled on my PC (for different reasons) but haven't experienced any connectivity problems on either computers or other devices (which should be able to use IPv6). If you mean missing availability of IPv6, I don't think there are any pages you can't see over IPv4?
> If you mean missing availability of IPv6, I don't think there are any pages you can't see over IPv4?
And there won't be while ISPs are lagging in their adoption, meaning nobody can set up an IPv6-only site if they expect to be accessible to everyone.
Also, there's more to it than sites: an IPv4-only client can't connect directly (P2P) to other clients behind carrier-grade IPv4 NAT, which leads to more centralized systems (and gives an advantage to large companies over independent developers and open source groups).
These ISPs are holding everyone back, hence the site submitted in this thread.
I know that NATs were not designed as security features, but I'm not sure we want every device out there to have a public IP address without NAT. I think this would create massive potential for botnets to take over older machines. And replacing NATs with firewalls would ultimately lead to the same problem for P2P.
It's unfortunate for people with more technical knowledge, but most people don't have that, and there is a point to protecting them from attacks (even if it's their fault that they didn't update).
Meanwhile irssi just removed support for DANE in the IRC client, which I believe means there are now zero IRC clients that will attempt to validate that you aren't talking to a rogue IRC server. Wasn't irssi the first and only one to implement it?
DNSSEC is dead on arrival. Nobody actually wants it.
The number of websites that are unreachable because you don't have IPv6 is zero, so saying your internet is not "up to date" because you don't have IPv6 doesn't mean much.
Two things. First, it is nice if you don't have to allocate ports on a NAT box to make a test system available. These days you can't really count on all non-production systems having public IPv4 addresses anymore.
Obviously that only works if all systems that need access have IPv6.
However, the main killer app for IPv6 is your ISP running out of IPv4 addresses. Carrier grade NAT boxes are expensive and introduce all kinds of issues. Better to move as much traffic to IPv6 as possible.
If at some point IPv6 traffic is the vast majority of the traffic for a website, then IPv4 traffic engineering may start to suffer. So technically the site will be reachable over IPv4 for a very long time, but it may be that at some point performance will be a lot worse than over IPv6.
"Protected from redirection to false IP addresses (DNSSEC)"
What does that mean? It means that whatever other DNS server I use seems to verify DNSSEC signatures (I use Google's DNS fwiw). Yet this doesn't provide any reasonable sense of protection, as the connection to that DNS server may very well be compromised.
This would very well show DNSSEC protection in an open public wifi if the provider decided to enable DNSSEC.